Hi, I have divided this into sections, hoping it helps.
Temp-dir
I have noticed the new temporary-directory function, but I have not tried it, because I can't control when the directory is deleted. My impression is that it will be deleted when I leave the function that created it, but since I redirect to an updating function that loads the resulting PDF into a DB, I am afraid that the result will be deleted before I can upload it. So I manage the directories myself, creating and deleting them when I know all processes are done with them. Let me know if I assumed wrong about the deletion of the temp dir.
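To make the flow concrete, here is a minimal sketch of what I do instead (the page module, the 'Docs'/'Reports' databases, the paths and the /report URLs are placeholders rather than our real code; fop:transform is our FOP call):

  module namespace page = 'http://example.org/report';
  declare namespace rest = 'http://exquery.org/ns/restxq';
  declare namespace http = 'http://expath.org/ns/http-client';

  (: Step 1: non-updating RESTXQ function. I create the directory myself under the
     system temp folder, write the PDF into it and redirect to the updating function. :)
  declare %rest:path('/report/{$id}')
    function page:report($id as xs:string) {
    let $dir := file:temp-dir() || 'report-' || random:uuid() || file:dir-separator()
    return (
      file:create-dir($dir),
      file:write-binary($dir || 'report.pdf',
        fop:transform(db:open('Docs', $id || '.fo'))),
      <rest:response>
        <http:response status="302">
          <http:header name="Location"
            value="/report/upload?id={ $id }&amp;dir={ encode-for-uri($dir) }"/>
        </http:response>
      </rest:response>
    )
  };

  (: Step 2: updating RESTXQ function. The PDF goes into the DB here; the directory
     is only removed by the cleanup step below, once I know every process is done with it. :)
  declare %rest:path('/report/upload')
    %rest:query-param('id',  '{$id}')
    %rest:query-param('dir', '{$dir}')
    %updating function page:upload($id as xs:string, $dir as xs:string) {
    db:store('Reports', $id || '.pdf', file:read-binary($dir || 'report.pdf'))
  };

  (: Step 3: cleanup, called when nothing needs the directory any more. :)
  declare %rest:path('/report/cleanup')
    %rest:query-param('dir', '{$dir}')
    function page:cleanup($dir as xs:string) {
    file:delete($dir, true())
  };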
Case #1
I have attached an .fo file that contains http://... links to images. When I use it with the fop module, BaseX freezes (let $pdf := fop:transform($fo)).
The RESTXQ function from which it is called is non-updating. The update is performed after a redirect, where the PDF saved on the file system is uploaded.
This worked on 7.6.
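The shape of what I am sending is roughly the following (a made-up minimal example; the real .fo is attached, and the image URL here is invented, but ours point at our REST server in the same way):

  let $fo :=
    <fo:root xmlns:fo="http://www.w3.org/1999/XSL/Format">
      <fo:layout-master-set>
        <fo:simple-page-master master-name="A4"
            page-height="29.7cm" page-width="21cm">
          <fo:region-body/>
        </fo:simple-page-master>
      </fo:layout-master-set>
      <fo:page-sequence master-reference="A4">
        <fo:flow flow-name="xsl-region-body">
          <fo:block>
            <fo:external-graphic
              src="http://localhost:8984/rest/AppResources/images/image-1.png"/>
          </fo:block>
        </fo:flow>
      </fo:page-sequence>
    </fo:root>
  (: with the http:// link in @src this call never comes back; it was fine on 7.6 :)
  return fop:transform($fo)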
Case #2
Another example of what we were doing was processing a document through an .xsl. When the .xsl encountered a reference to a GUI item, we called a RESTXQ function that would grab the string for that GUI item based on multiple parameters: phone model, OS version, carrier, locale...
We do the same for images:
doc('http://localhost:8984/get-image-for?id=image-1&model=3083&carrier=smb&locale=ja-jp&os=android40...'). This way we can create a link to the right image, in the correct DPI, for the user's model (get-image-for accesses our config file saved in a DB and returns the link to the image that matches the parameters). Now I resolve links to images in the XQuery before applying the .xsl. In that case it's not such a big issue, because as long as I am processing indexed content I can pre-process it in XQuery. It's more of an annoyance that I have to rewrite most of my code to match the new paradigm (I WAS RELYING HEAVILY on REST and RESTXQ). The rewriting is almost complete, and I have not hit anything that cannot be done in a different way for indexable content. As a bonus, I seem to have gained in performance. The real issue we still have is when accessing raw files; see cases #1, #3 and #4.
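For indexable content the rewrite basically means moving the lookup from the RESTXQ endpoint into a plain function over the config DB and resolving the references before the XSLT runs. A sketch, with made-up names ('config.xml', the image element, @ref/@href, the topic element) standing in for our real structure:

  (: direct lookup in the config DB, replacing the old call to /get-image-for :)
  declare function local:image-for(
    $id as xs:string, $model as xs:string, $carrier as xs:string,
    $locale as xs:string, $os as xs:string
  ) as xs:string? {
    (db:open('AppResources', 'config.xml')
      //image[@id = $id][@model = $model][@carrier = $carrier]
             [@locale = $locale][@os = $os])[1]/@href/string()
  };

  (: pre-resolve every image reference in a topic before it is handed to xslt:transform :)
  declare function local:resolve-images(
    $topic as element(topic), $model as xs:string, $carrier as xs:string,
    $locale as xs:string, $os as xs:string
  ) as element(topic) {
    copy $c := $topic
    modify
      for $ref in $c//image/@ref
      return replace value of node $ref
        with local:image-for($ref, $model, $carrier, $locale, $os)
    return $c
  };

The resolved topic is what goes into xslt:transform, so the stylesheet never has to call back into REST; the GUI-string lookup follows the same pattern, just returning text instead of a link.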
Case #3
validate:xsd($xml-file, $xsd), where $xsd is at
http://localhost:8984/rest/AppResources/xsdname.xsd, gives me a headache too. I think it can't load the included .mod files. So I have created a function called bypassvalidateissue ( ;-) ) that saves the DTDs or XSDs to files before applying them, and then deletes them.
I need the .xsd files in the database to be able to run a query process that creates documentation of the structures and elements when the XSDs get updated, so I want to keep them in the DB that holds our applicative resources. Our .xsd files are saved as indexable content, but we have the same issue with the .dtd files that are saved as raw content. A case that failed for me was trying validate:dtd($xhtml-file, $html-transitional.dtd) when the DTD is in my AppResources DB. I think it's because the DTD is modular.
The validations using REST access worked for all cases in 7.6. Now I have to bypass all such accesses by saving the files on the file system, processing, and deleting them.
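For completeness, bypassvalidateissue is essentially this (simplified; 'AppResources' and its flat 'schemas' folder are just how our DB happens to be organized):

  declare function local:bypassvalidateissue(
    $input  as node(),
    $schema as xs:string
  ) as empty-sequence() {
    (: copy the schema and everything stored next to it from the DB to disk,
       so that relative includes / DTD modules can be resolved again :)
    let $dir := file:temp-dir() || 'schemas-' || random:uuid() || file:dir-separator()
    return (
      file:create-dir($dir),
      for $path in db:list('AppResources', 'schemas')
      return
        if (db:is-raw('AppResources', $path))
        then file:write-binary($dir || replace($path, '.*/', ''),
               db:retrieve('AppResources', $path))
        else file:write($dir || replace($path, '.*/', ''),
               db:open('AppResources', $path)),
      (: validate against the copy on disk, then clean up :)
      validate:xsd($input, $dir || $schema),
      file:delete($dir, true())
    )
  };

The DTD case is identical, with validate:dtd and the DTDs coming out of the raw branch.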
Case #4
I have 3 .xsl files saved in my Application Resources DB: process-menu.xsl and process-topic.xsl, which both include common-inline-elements.xsl. xslt:transform($xml-file, 'http://.../rest-process.xsl'), which used to work, now fails, and I can't just read the .xsl as an XML node, because it doesn't know how to resolve the includes. My includes used to work when referenced as <xsl:include href="http://localhost:8984/rest/common-inline-elements.xsl"/>. In 7.4, I think they also worked when a relative path was used... but I can't remember for sure.
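What I do now instead is the same save-to-disk bypass: materialize the three stylesheets into one directory and point xslt:transform at the copy on disk, so the relative includes can resolve again (this assumes the includes are switched to relative hrefs; names and paths are simplified):

  let $dir   := file:temp-dir() || 'xsl-' || random:uuid() || file:dir-separator()
  let $xsls  := ('process-menu.xsl', 'process-topic.xsl', 'common-inline-elements.xsl')
  let $input := db:open('Docs', 'topics/topic-1.xml')  (: placeholder input document :)
  return (
    file:create-dir($dir),
    (: put the three stylesheets next to each other on disk, so that a relative
       <xsl:include href="common-inline-elements.xsl"/> can be resolved again :)
    for $xsl in $xsls
    return file:write($dir || $xsl, db:open('AppResources', 'xsl/' || $xsl)),
    (: transform against the copy on disk instead of the DB node or the REST URL :)
    xslt:transform($input, $dir || 'process-topic.xsl'),
    file:delete($dir, true())
  )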
Summary
In summary, I don't feel as RESTED as I used to.
We use BaseX to create content from a single source of topics that get customized for different phone models with different interfaces and DPIs, running on different OSes and on different carriers that have their own custom needs, for different countries that have different network setups, etc. We also have adapted interfaces and navigation patterns for mobile/browser/PDF media targets. We make extensive use of our queries' ability to grab, aggregate and customize all the pieces of content, all of that in 30-something languages.
We can't replicate some of these concurrent accesses in simple cases, but we are very careful with updating functions, redirecting whenever necessary.
Having everything centralized in DBs is useful for faster deployment on different systems that use different OSes (duplication of collections with capitals in their names on Windows - we still have not changed our naming conventions), including a .WAR deployment that receives user feedback and that doesn't allow writing to the file system. Short term, we don't need the functions that write to the file system in the .WAR deployment, but if even one department asks for a PDF report, we could suffer from the lack of read access to images (case #1).
Let me know if some of these cases are not clear, or if you need extra code samples.