Hi, sorry, I know this exists ...
"If an API supports chunking (when the dataset is too large) multiple calls need to be made to complete the process. XPathEntityprocessor supports this with a transformer. If transformer returns a row which contains a field * $hasMore* with a the value "true" the Processor makes another request with the same url template (The actual value is recomputed before invoking ). A transformer can pass a totally new url too for the next call by returning a row which contains a field *$nextUrl* whose value must be the complete url for the next call." But is there a true example of it's use somewhere? Im trying to figure out if I know before import that I have 56 "pages" to index how to set this up properly. (And how to set it up if pages need to be determined by something in the feed, etc). Thanks. - Jon