[Yaffs] problem with object (file) creation

Michael Fischer fischi@epygi.de
Fri, 8 Oct 2004 13:54:47 +0200


Hello everyone,
I am new to the list as well as to the Linux VFS/FS stuff, and I am looking
for any help or comments on the following:
In some cases, when copying a large number of files, I got complaints from the cp
command that a newly created file was a directory, so it could not write data
into it. The following steps reproduce the behavior (a small C sketch of such a
scenario follows the list):
- delete a directory with many files and subdirectories recursively,
- while some process still holds a reference to the directory or has some file
  open (e.g. cd /test; rm -rf /test),
- move or copy the deleted directory tree back from some backup location.
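
For illustration, here is a minimal, hypothetical sketch of a userspace program
that keeps such a reference alive; the paths /test/somefile and /backup/test
and the 60-second sleep are just placeholders I made up for the example:

/* Reproduction helper sketch: hold an open reference into /test while the
 * tree is deleted and restored from another shell. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* Hold an open descriptor so the VFS keeps the inode cached. */
	int fd = open("/test/somefile", O_RDONLY);
	if (fd < 0) {
		perror("open /test/somefile");
		return 1;
	}

	/* Meanwhile, from another shell:
	 *   rm -rf /test
	 *   cp -a /backup/test /test
	 * cp may then complain that a newly created file is a directory. */
	sleep(60);

	close(fd);
	return 0;
}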

Debugging it further, I found that the yaffs layer deletes objects which, due
to outstanding references, still exist in the inode cache of the VFS layer.
Looking into the code, I found that yaffs_fs.c/yaffs_mknod => __yaffs_mknod =>
yaffs_FindNiceObjectBucket returns the object id. Depending on
YAFFS_NOBJECT_BUCKETS, you may get the id of a recently deleted object.
When yaffs_get_inode (iget4) is then called, it can return an already existing
inode (inode reference count > 1). This inode does not get reinitialized, so
attributes such as the directory mode are still in place and cause the
described behavior.

So a small patch (read: quick and dirty hack) to yaffs_mknod (see below) solves
the problem in my case:

doitagain:
	obj = __yaffs_mknod(parent, dentry, mode, rdev);
	if (obj)
	{
		inode = yaffs_get_inode(dir->i_sb, mode, rdev, obj);
		T(YAFFS_TRACE_OS, (KERN_DEBUG "yaffs_mknod created object %d count = %d\n",
		  obj->objectId, atomic_read(&inode->i_count)));
		/* If iget4 handed back an inode that is still referenced elsewhere,
		 * the object id of a recently deleted object was reused: drop the
		 * inode and the object and retry with a fresh object. */
		if (atomic_read(&inode->i_count) > 1) {
			iput(inode);
			yaffs_DeleteFile(obj);
			goto doitagain;
		}
		d_instantiate(dentry, inode);
		error = 0;
	}
	else
	{
		/* ... error path unchanged ... */

OK, as mentioned before, I am new to all this, so I am of course unsure whether
I have overlooked something, whether this is fixed in later versions, or what a
clean patch would look like.

Thanks a lot for any comments and best regards,
Michael