[Yaffs] Re: performance of YAFFS1 on NOR flash
Peter Barada
Peter.B at LogicPD.com
Fri Oct 14 19:32:53 BST 2005
On Thu, 2005-10-13 at 14:10 +1300, Charles Manning wrote:
> The main reason for this is that YAFFS is primarily designed for NAND, not
> NOR. NOR can be made to work, but there have always been expected to be
> performance issues.
>
> All, or at least most of, the YAFFS NOR performance issues could be made to go
> away with some effort. Since YAFFS's niche is mainly the larger (32MB+) NAND
> array, little effort has been put into tuning YAFFS for NOR.
>
> Most of the performance issues on NOR boil down to the slow write performance
> and erase performance in NOR. With NAND, the faster write and erase meant
> that gross locking could be used instead, allowing much simpler locking
> strategies.
>
> To make NOR work better, there are a few things that could be done:
> 1) Asynchronous erasure. Currently YAFFS just does its erasures synchronously,
> firstly because that is all that NAND supports, and secondly because NAND
> erasure is fast (2msec vs 1sec for NOR). Asynchronous erasure means two things:
> a) Decouple the erasure. Currently YAFFS erases flash as soon as it
> becomes available to erase. Instead, the dirty blocks could be put on a list
> for subsequent erasure.
> b) Use of erase suspend. This allows the erasures to be interrupted by other
> read/write activities, then restarted.
>
> 2) Running a background gc thread. Currently gc is done as a parasitic
> activity as part of any write operation. This means that any operations that
> cause garbage to be created slow down subsequent write operations. Again,
> this is not a big issue with NAND. Doing more gc while the file system is
> inactive (from a user/application point of view) would give the applications
> a better run at writing etc since less gc would be done during the actual
> writes.
>
> 3) The use of finer grain locking could potentially improve the situation for
> NOR too. This would allow multi-process reads while writing is blocked by
> gc/writing.
>
> 4) Tuning gc for NOR rather than NAND.
>
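For 1a and 2, something along these lines is roughly what I'd picture (a
rough sketch against 2.4; yaffs_QueueBlockForErasure, yaffs_EraseBlock and
yaffs_EraseThread are names I've made up, and the real thing would also
have to update the block state and take the gross lock around the actual
erase):

#include <linux/list.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

/* Deferred erasure: instead of erasing a dirty block inline, queue it
 * and let a background thread erase while the fs is otherwise idle.
 * yaffs_EraseBlock() stands in for whatever the real erase call is;
 * on NOR it would use erase suspend underneath. */
struct deferredErase {
	struct list_head list;
	int blockNo;
};

static LIST_HEAD(eraseQueue);
static spinlock_t eraseQueueLock = SPIN_LOCK_UNLOCKED;

/* Called where YAFFS currently erases synchronously. */
static void yaffs_QueueBlockForErasure(yaffs_Device *dev, int blockNo)
{
	struct deferredErase *e = kmalloc(sizeof(*e), GFP_KERNEL);

	if (!e) {
		yaffs_EraseBlock(dev, blockNo);	/* no memory: erase inline */
		return;
	}
	e->blockNo = blockNo;
	spin_lock(&eraseQueueLock);
	list_add_tail(&e->list, &eraseQueue);
	spin_unlock(&eraseQueueLock);
}

/* Thread body, started once per device with
 * kernel_thread(yaffs_EraseThread, dev, 0); thread setup (daemonize
 * etc.) omitted.  Drains the queue, sleeping when nothing is dirty. */
static int yaffs_EraseThread(void *data)
{
	yaffs_Device *dev = (yaffs_Device *)data;

	while (!signal_pending(current)) {
		struct deferredErase *e = NULL;

		spin_lock(&eraseQueueLock);
		if (!list_empty(&eraseQueue)) {
			e = list_entry(eraseQueue.next,
				       struct deferredErase, list);
			list_del(&e->list);
		}
		spin_unlock(&eraseQueueLock);

		if (e) {
			yaffs_EraseBlock(dev, e->blockNo);
			kfree(e);
		} else {
			set_current_state(TASK_INTERRUPTIBLE);
			schedule_timeout(HZ / 10);
		}
	}
	return 0;
}

The same thread could run the garbage collector whenever the queue is
empty, which would cover point 2 as well; and for point 3 the obvious
first step would seem to be replacing the gross semaphore with a
rw_semaphore so readers only block while gc or a write holds it.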
> While all these things **can** be done, I would not want to do anything that
> impacts on YAFFS' NAND performance, since YAFFS is primarily a NAND fs.
>
> More below:
>
> On Thursday 13 October 2005 12:04, Peter Barada wrote:
> > I started with YAFFS from CVS back on 2005-05-26, and have made YAFFS
> > work on a 547x running linux-2.4.26 targeting Intel StrataFlash (32MB,
> > 0x20000 block size), using 512 byte chunks and creating a spare area at
> > the end of each flash block so I get 248 chunks per flash block
> > (131072 / (512 + 16)), and I have it all working.
>
> > What I'm finding however is that its performance as a filesystem can at
> > times be abysmal. I'm not sure why, but I'm finding cases where the
> > gross lock is held in excess of a *minute* which causes other processes
> > that access the filesystem to block. This was observed while copying
> > 2.3MB of data from /lib into the flash, and at the same time doing a
> > 'du' of the flash filesystem.
> >
> > To make matters worse, the code in yaffs_GrossLock() uses down() instead
> > of down_interruptible() which makes it impossible to interrupt any
> > processes waiting for the lock.
> >
> > 1) Has anyone patched Yaffs to use down_interruptible() instead of
> > 'down'?
>
> My POSIXness is not the best by any means, but I have a hunch that regular
> file writes (as opposed to special character devices) may not be interrupted.
I'm not sure about POSIX, but using a raw 'down' is frowned upon in the
kernel since you can end up with processes that take quite a while to
respond to signals (if at all), because the 'down' is not interruptible.
If I change the lock to use down_interruptible, where in yaffs_fs.c do I
have to be *very* careful about exiting with an -ERESTARTSYS?
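What I have in mind is the usual pattern from character drivers; a sketch
(yaffs_GrossLockInterruptible is a name I've made up, and the cleanup each
yaffs_fs.c entry point needs before bailing out is exactly the part I'm
unsure about):

/* Sketch: interruptible variant of the gross lock.  Returns 0 on
 * success.  Callers return -ERESTARTSYS so the syscall is restarted
 * transparently once the signal has been handled. */
static int yaffs_GrossLockInterruptible(yaffs_Device *dev)
{
	if (down_interruptible(&dev->grossLock))
		return -ERESTARTSYS;	/* signal arrived while sleeping */
	return 0;
}

/* e.g. at the top of yaffs_file_write, before any state is touched: */
/*
 *	if (yaffs_GrossLockInterruptible(dev))
 *		return -ERESTARTSYS;
 */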
> >
> > I've instrumented yaffs_grossLock/yaffs_grossUnlock to discern where the
> > time goes, and turned nShortOpCache to zero
> >
> > [root@localhost root]# cp /lib/libc* /mnt/flash-user/junk/
> > yaffs_GrossLockPriv: blocked by yaffs_file_write:806
> > yaffs_GrossLockPriv: blocked by yaffs_file_write:806
> > yaffs_GrossLockPriv: blocked by yaffs_file_write:806
> >
> > [root@localhost root]# rm /mnt/flash-user/junk/*
> > yaffs_GrossUnlock: lock held by yaffs_unlink:1053 for 4 jiffies
> > [root@localhost root]# cp /lib/libc* /mnt/flash-user/junk/
> > yaffs_GrossUnlock: lock held by yaffs_file_write:806 for 78 jiffies
> > yaffs_GrossLockPriv: blocked by yaffs_file_write:806
> > yaffs_GrossLockPriv: yaffs_readdir:871 wait 21297 jiffies by yaffs_file_flush:488
> > [root@localhost root]#
> >
> >
> > Notice that the file flush holds the lock for over three *MINUTES*.
>
>
> This seems rather odd. If there was no cache (nShortOpCache==0), then all that
> gets done is updating the object header. This is only one write + whatever
> gc.
>
> The other part of this is that it is hard to determine what is also going on
> in the MTD layer. Any yaffs locking will get stuck behind any locking on the
> NOR device.
Actually, I've modified the lock/unlock code to be:

atomic_t n_locksRequested = ATOMIC_INIT(0);
atomic_t n_locksGranted = ATOMIC_INIT(0);
static unsigned long maxJiffies;
atomic_t sumLockTime = ATOMIC_INIT(0);
atomic_t sumLockTimeSquared = ATOMIC_INIT(0);

static void yaffs_GrossLock(yaffs_Device *dev)
{
	unsigned long reqJiffies, diffJiffies;

	T(YAFFS_TRACE_OS, ("yaffs locking\n"));
	atomic_inc(&n_locksRequested);
	reqJiffies = jiffies;
	down(&dev->grossLock);
	atomic_inc(&n_locksGranted);

	/* Accumulate wait-time statistics and report any new maximum. */
	diffJiffies = jiffies - reqJiffies;
	if (diffJiffies > maxJiffies) {
		maxJiffies = diffJiffies;
		printk("%s: wait %lu for lock\n", __FUNCTION__, maxJiffies);
	}
	atomic_add(diffJiffies, &sumLockTime);
	atomic_add(diffJiffies * diffJiffies, &sumLockTimeSquared);
}

static void yaffs_GrossUnlock(yaffs_Device *dev)
{
	int yieldToRequestor;

	T(YAFFS_TRACE_OS, ("yaffs unlocking\n"));
	/* A nonzero difference means at least one other thread has
	 * requested the lock but not yet been granted it, i.e. it is
	 * blocked in down(); sample before up() to catch it. */
	yieldToRequestor = atomic_read(&n_locksGranted) -
			   atomic_read(&n_locksRequested);
	up(&dev->grossLock);
	if (yieldToRequestor)
		yield();
}
This checks, on an up(), whether another thread is waiting for the lock,
and if so executes 'yield()' to give the other thread a chance to get in.
This works much better for two processes trying to access the same YAFFS
filesystem.
And I get much better response. Note that I'm gathering statistics on
the amount of time a thread is blocked; for a run where I copy
/lib/libc* (about 2.3MB of data) into the flash while another process is
doing a 'du' over the flash, the results I see are:
N = 976
Sum(Xi) = 26785
Sum(Xi*Xi) = 10322947
MaxJiffies = 1397
Yielding an average of 27.4 jiffies (26785/976) and a standard deviation
of about 99.1 jiffies (sqrt(10322947/976 - 27.4^2)), which indicates that
the time spent blocked on the lock is usually minimal, with some rather
large outliers (note the maxJiffies) due to garbage collection. Also the
printk triggers:
yaffs_GrossLock: wait 68 for lock
yaffs_GrossLock: wait 72 for lock
yaffs_GrossLock: wait 144 for lock
yaffs_GrossLock: wait 164 for lock
yaffs_GrossLock: wait 1228 for lock
yaffs_GrossLock: wait 1393 for lock
Which indicates that the predominant lock waits are below 164 jiffies,
with the outliers showing up when garbage collection kicks in. I'm
tempted to modify the collection to segregate times over 500 jiffies so
the deviation doesn't overwhelm the average.
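If I do segregate them, it would look something like this (the threshold
and the names are just illustrative):

#define LOCK_OUTLIER_JIFFIES 500	/* illustrative gc-outlier cutoff */

/* Separate accumulators so the gc outliers don't swamp the mean. */
atomic_t nShortWaits = ATOMIC_INIT(0);
atomic_t sumShortWaitTime = ATOMIC_INIT(0);
atomic_t nLongWaits = ATOMIC_INIT(0);
atomic_t sumLongWaitTime = ATOMIC_INIT(0);

/* Called from yaffs_GrossLock with the measured wait time. */
static void yaffs_RecordLockWait(unsigned long diffJiffies)
{
	if (diffJiffies < LOCK_OUTLIER_JIFFIES) {
		atomic_inc(&nShortWaits);
		atomic_add(diffJiffies, &sumShortWaitTime);
	} else {
		atomic_inc(&nLongWaits);
		atomic_add(diffJiffies, &sumLongWaitTime);
	}
}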
I haven't instrumented JFFS2, but I have a feeling that its asynchronous
garbage collection would give better overall numbers.
> >
> > 2) Is there any way to lock YAFFS at a finer grain than the gross lock
> > that exists now?
> > 3) How can yaffs_file_flush take *so* long if nShortOpCache is set to
> > zero, so yaffs_FlushFilesChunkCache should take no time?
>
> Agreed. That does seem rather bizarre. Did the real-world time match the
> instrumenting?
Yes.
> > 4) Has anyone seen this type of problem on other NOR-based YAFFS
> > implementations?
>
> Feedback will be interesting...
>
> -- Charles
>
--
Peter Barada <Peter.B at LogicPD.com>