indexing slow-down (i.e., speeding up programs)
Richard Gaskin
ambassador at fourthworld.com
Sun Apr 13 17:25:00 EDT 2003
John Vokey wrote:
> Here's my problem: initially, the routine takes much less than 1
> second per hundred words (which, when multiplied by the number of words
> remaining to index, results in an estimate of some small fraction of an
> hour to do the whole task). However, it rapidly (as an exponential
> function) slows down, so that by the time it reaches the middle of the
> alphabet (M), it takes many minutes per 100 words, and an
> ever-increasing time estimate for the remaining items (now over 60
> hours!). Clearly, either the "repeat for each" command for the first
> dictionary or the "get line pointer..." for the second (or both)
> get(s) slower and slower as I progress through the dictionaries,
> presumably because to do one or the other (or both), metacard counts
> carriage returns from the beginning of the dictionary.
The "get line pointer" is the part that won't scale well: each time
through the loop the engine must count returns from the beginning of the
variable until it reaches the line number you're looking for, so the cost
of each lookup grows with the line number.
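Not MetaTalk, but the same effect can be sketched in Python. Here a hypothetical get_line helper mimics what a chunk expression like "line i of var" has to do internally, scanning from the start of the string on every call:

```python
def get_line(text, i):
    """Return the i-th line (1-based) by scanning from the start,
    the way a chunk expression like 'line i of var' must."""
    start = 0
    for _ in range(i - 1):
        # count return characters one by one from the beginning
        start = text.index("\n", start) + 1
    end = text.find("\n", start)
    return text[start:] if end == -1 else text[start:end]

words = "\n".join("word%d" % k for k in range(1000))
# Each call re-scans from the top, so fetching line i costs O(i)
# and walking the whole list this way costs O(n^2) overall.
print(get_line(words, 1))    # word0 -- no returns to count
print(get_line(words, 900))  # word899 -- 899 returns counted first
```

That quadratic total is exactly why the indexing run starts fast and slows to a crawl by the middle of the alphabet.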
One way to speed access to an element is to use an array. Arrays take
longer to load and require slightly more memory, but are _much_ faster for
accessing specific elements.
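Again sketching the idea in Python rather than MetaTalk: splitting the text into a list once plays the role of loading it into an array, paying the scan cost a single time, after which every element access is constant time regardless of position.

```python
text = "\n".join("word%d" % k for k in range(1000))

# Pay the parsing cost once up front (the "load" step)...
lines = text.split("\n")   # one O(n) pass over the text

# ...then every lookup is O(1), whether it's near the top or the bottom.
print(lines[0])      # word0
print(lines[899])    # word899
```

The same trade-off applies in MetaCard: splitting a variable into an array by return costs one pass and some extra memory, but later lookups no longer depend on how deep into the data you are.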
--
Richard Gaskin
Fourth World Media Corporation
Developer of WebMerge 2.2: Publish any database on any site
___________________________________________________________
Ambassador at FourthWorld.com http://www.FourthWorld.com
Tel: 323-225-3717 AIM: FourthWorldInc