Christian,
I made a small change to the code, and it now works without any exception.
This is a great change. Thanks for implementing it so quickly for me.
let $list :=
  for $rec in sql:execute($conn, $sqlStmt)
  let $is :=
    for $line in $rec/*
    return element { fn:data($line/@name) } { $line/text() }
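The difference from the earlier script further below is that the rows are now consumed directly from the sql:execute call:

  for $rec in sql:execute($conn, $sqlStmt)

rather than being bound to an intermediate variable first:

  let $res := sql:execute($conn, $sqlStmt)
  for $rec in $res

which is presumably what allows them to be retrieved iteratively.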
On Thu, Jan 8, 2015 at 2:30 PM, Erol Akarsu eakarsu@gmail.com wrote:
Christian,
I have tested BaseX's latest snapshot against Oracle 11g.
Unfortunately, I am still getting an out-of-memory exception. I am using it as shown below; is this the correct usage? I also saw that there is a Streamable data structure in BaseX. Do I have to use it together with this latest change?
declare variable $sqlfile as xs:string external;
declare variable $outputfile as xs:string external;

let $lines := file:read-text-lines($sqlfile)
let $lines := for $line in $lines return fn:normalize-space($line)
let $sqlStmt := fn:string-join($lines, " ")
let $addBegin := file:append-text($outputfile, "<RECORDS>", "UTF-8")
let $url := "jdbc:oracle:thin:"
let $conn := sql:connect($url)
let $res := sql:execute($conn, $sqlStmt)
let $list :=
  for $rec in $res
  let $is :=
    for $line in $rec/*
    return element { fn:data($line/@name) } { $line/text() }
  return file:append($outputfile, <record>{$is}</record>)
return ($list, file:append-text($outputfile, "</RECORDS>", "UTF-8"))
On Thu, Jan 8, 2015 at 11:09 AM, Christian Grün <christian.gruen@gmail.com> wrote:
Hi Erol,
I have rewritten sql:execute such that the results are now requested and returned in an iterative manner. As iterative retrieval of external resources always comes with risks (the requested data source may be updated at the same time; statements may not be properly closed if not all results are retrieved by a client; etc.), I would ask you to do some testing with the latest snapshot [1] and give us feedback about your experiences.
Thanks in advance, Christian
[1] http://files.basex.org/releases/latest
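For reference, a minimal sketch of how the iterative results could be consumed row by row and written to a file, with the connection closed explicitly at the end; the JDBC URL, query, and output path are placeholders:

  (: placeholder connection URL and query :)
  let $conn := sql:connect("jdbc:oracle:thin:@//host:1521/service")
  let $out  := "records.xml"
  return (
    file:write-text($out, "<RECORDS>"),
    (: append one <record> element per result row :)
    for $rec in sql:execute($conn, "SELECT * FROM some_table")
    return file:append($out,
      <record>{
        for $col in $rec/*
        return element { fn:data($col/@name) } { $col/text() }
      }</record>),
    file:append-text($out, "</RECORDS>"),
    (: close the connection so the underlying statement is released :)
    sql:close($conn)
  )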
On Wed, Jan 7, 2015 at 11:58 AM, Erol Akarsu eakarsu@gmail.com wrote:
Hi Christian,
I have another issue with the following code snippet. It works fine when the database returns a small number of JDBC result rows. But if the SQL query returns, say, 40 million rows, you easily get an out-of-memory exception. I guess BaseX first reads all 40 million rows into memory and only then lets the user work with the SQL data.
Is there a way for BaseX to read the SQL rows block by block (say, 1000 by 1000) and write them into a file?
I appreciate your help.
declare variable $sqlfile as xs:string external;
declare variable $outputfile as xs:string external;

let $lines := file:read-text-lines($sqlfile)
let $lines := for $line in $lines return fn:normalize-space($line)
let $sqlStmt := fn:string-join($lines, " ")
let $addBegin := file:append-text($outputfile, "<RECORDS>", "UTF-8")
let $url := "jdbc:oracle:thin:"
let $conn := sql:connect($url)
let $res := sql:execute($conn, $sqlStmt)
let $list :=
  for $rec in $res
  let $is :=
    for $line in $rec/*
    return element { fn:data($line/@name) } { $line/text() }
  return file:append($outputfile, <record>{$is}</record>)
return ($list, file:append-text($outputfile, "</RECORDS>", "UTF-8"))
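One way to get the block-by-block behaviour asked about above would be to page on the SQL side rather than in BaseX. This is only a rough sketch using Oracle's classic ROWNUM pagination; the connection URL, table name, ordering column, and total row count are placeholders:

  let $conn  := sql:connect("jdbc:oracle:thin:@//host:1521/service")
  let $out   := "records.xml"
  let $block := 1000
  return (
    file:write-text($out, "<RECORDS>"),
    (: 40 million rows, fetched in blocks of 1000 via ROWNUM paging :)
    for $i in 0 to 39999
    let $sql :=
      "SELECT * FROM (SELECT t.*, ROWNUM rn FROM " ||
      "(SELECT * FROM products ORDER BY id) t " ||
      "WHERE ROWNUM <= " || (($i + 1) * $block) || ") " ||
      "WHERE rn > " || ($i * $block)
    for $rec in sql:execute($conn, $sql)
    return file:append($out,
      <record>{
        (: drop the helper RN column that was added for paging :)
        for $col in $rec/*[@name != 'RN']
        return element { fn:data($col/@name) } { $col/text() }
      }</record>),
    file:append-text($out, "</RECORDS>"),
    sql:close($conn)
  )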
On Tue, Jan 6, 2015 at 4:22 AM, Christian Grün <christian.gruen@gmail.com> wrote:
Hi Erol,
I have several XML document indexes keeping product features, and one master index file of products. Currently, we have about 25 million products, and the index is about 25 GB in size.
One solution is to generate a BaseX command script that contains multiple subsequent update operations. It can, for example, be created as follows:
let $records := count(db:open('db')//product-node)
let $commands :=
  for tumbling window $w in (1 to $records)
    start at $s when true()
    only end at $e when $e - $s = 999999
  return
    'for $product-node in //product-node' ||
    '[position() = ' || $s || ' to ' || $e || '] ' ||
    'return insert node <a/> into $product-node'
return file:write-text-lines('script.bxs', $commands)
...and run it via "basex script.bxs".
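For illustration, the generated script.bxs would then contain one query per block of one million products, i.e. lines of the form:

  for $product-node in //product-node[position() = 1 to 1000000] return insert node <a/> into $product-node
  for $product-node in //product-node[position() = 1000001 to 2000000] return insert node <a/> into $product-node
  ...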
Hope this helps, Christian
My XQuery application reads feature data from an auxiliary XML document and enhances the master document.
To do this, I insert data into the master document with a generic insert command:
insert node <FEATURES_NODE> into <PRODUCT_NODE>
But XQuery Update executes the update process only after all insert requests have been collected. With the huge size of my index, the machine's memory (35 GB) is exhausted easily and the process cannot finish.
Is there a way to execute the individual XQuery update requests one by one so that the process can finish?
Erol Akarsu