I have some code that reads from an INI file in JavaScript, using the ActiveX FileSystemObject.
This isn't particularly efficient but does the job: it reads the whole file into an array, appends any changes, and writes it back.
The problem I'm having is that another process, a C# XBAP application, is reading from this INI file (using GetPrivateProfileString) at the same time as I could potentially be trying to write to it from the JS.
The JavaScript fails because the file, or part of it, is locked, and the file ends up getting corrupted or even totally cleared, since I am trying to write back the whole file each time.
Preferably, what I need is a way to determine in JavaScript whether a file is locked, as the writes are not urgent and I want to let any reads finish first.
I just can't seem to find any way of syncing these two completely separate ways of accessing the file.
Maybe you could use try/catch. If you open the file for appending (OpenTextFile(filename, 8)) while the file is locked, it should raise an exception; the same should be true for writing/saving the file.
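For example, a minimal sketch of that check, assuming a classic JScript/ActiveX environment (the INI path is just a placeholder):

var fso = new ActiveXObject("Scripting.FileSystemObject");

// Returns true if the file can currently be opened for appending,
// i.e. no other process holds a conflicting lock on it.
function isWritable(path) {
    try {
        var ts = fso.OpenTextFile(path, 8); // 8 = ForAppending
        ts.Close();
        return true;
    } catch (e) {
        return false; // open failed, so the file (or part of it) is locked
    }
}

if (isWritable("C:\\path\\to\\settings.ini")) {
    // reasonably safe to write the whole file back now
}

Note the check is only advisory: the other process could grab the file between the test and your write, so keep a try/catch around the write itself as well.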
Related
I have a text file that has some strings in it. I am trying to clear it to put something else in it. What is the right code?
I have tried File.Clear() but it keeps throwing errors.
Assuming you're using Node.js and not a browser (in a browser you have other means of persistence, like localStorage, but not files), check this. Remember that you work with an in-memory representation of the file: if you want an empty one you can create one anew, and if you want to clear the contents on the disk you need to write an empty string (or whatever content you need) to the disk.
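A minimal sketch with Node's built-in fs module (the file name is just an example):

const fs = require('fs');

// Overwriting with an empty string truncates the file on disk.
fs.writeFileSync('notes.txt', '');

// Then write whatever new content you need.
fs.writeFileSync('notes.txt', 'new content\n');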
I have been shown a trick: instead of having the data in datafile.json and loading it with AJAX, the data is encapsulated in a single object in a datafile.js, such as var Data = { /* all data goes here */ }. Then datafile.js is just loaded as an external script in the HTML head.
It works just as well as if the objects were loaded with AJAX; are there any drawbacks to this?
JSON is JavaScript. (Back in the day, the idea was that you could simply eval() it ...) Therefore, the file that you speak of is simply ... "a JavaScript assignment statement, stored in a file."
The only potential issue with storing this as a separate file might be "a timing hole." The source code must be separately retrieved. I'm not sure the browser would wait to do that, so it might be possible for other JavaScript code to execute that does not see var Data, because that block of code hasn't been retrieved and executed yet.
When you have "lots of invariant fixed data," I customarily put everything, including the data, into one ("yeah, it's big ...") JS file, so that I know "it's all there" before any of it tries to be executed. Yes, there are definite advantages to simply including fixed data directly in your source file, as you're effectively doing here, but I'm not sure I see an advantage (and I might see a hole ...) in using several JS files.
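If you do keep the data in its own file and worry about that timing hole, a minimal defensive sketch (assuming datafile.js defines a global var Data) is to poll until the variable exists before touching it:

// In code that might run before datafile.js has loaded:
function whenDataReady(callback) {
    if (typeof Data !== 'undefined') {
        callback(Data);
    } else {
        // datafile.js hasn't been retrieved and executed yet; retry shortly.
        setTimeout(function () { whenDataReady(callback); }, 50);
    }
}

whenDataReady(function (data) {
    // safe to use the data here
});

In practice, plain script tags in the head execute in document order, so placing datafile.js before the code that uses it usually closes the hole on its own.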
I want to build a JavaScript function that can look in specific file locations and find whether a file is there, and when it was put there. If it is no longer there, I want to see if I can get the time it was taken out (and when it was put in, if possible). Are there already-written functions for doing this, or will I have to develop them myself?
My script adds some annotations to each page on a site, and it needs a few MBs of static JSON data to know what kind of annotations to put where.
Right now I'm including it with just var data = { ... } as part of the script, but that's really awkward to maintain and edit.
Are there any better ways to do it?
I can only think of two choices:
Keep it embedded in your script, but to keep it maintainable (a few megabytes means your editor might not like it much), put it in another file and add a compilation step to your workflow to concatenate the two. Since you are adding a compilation step anyway, you can also uglify your script, so it might be slightly faster to download the first time.
Get it dynamically using JSONP (see the sketch after this list). Put it on your web server, Amazon S3, or even better a CDN. Make sure it is server-cacheable and gzipped so it won't slow down the client's network by being downloaded on every page! This solution works better if you want to update your data regularly, but not your script (I think Tampermonkey doesn't support auto-updates).
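A minimal JSONP sketch (the URL and callback name are hypothetical; the server must wrap the JSON in a call to the function named in the query string):

function loadAnnotationData(url, callbackName, onData) {
    // Expose the callback globally; the fetched script will invoke it.
    window[callbackName] = function (data) {
        delete window[callbackName];
        onData(data);
    };
    var s = document.createElement('script');
    s.src = url + '?callback=' + callbackName;
    document.head.appendChild(s);
}

// The server responds with:  handleData({ ...the JSON payload... });
loadAnnotationData('https://example.com/annotations.js', 'handleData', function (data) {
    // use the annotation data here
});

Keep in mind that sandboxed userscripts may not share window with the page, so depending on your userscript manager you may need GM_xmlhttpRequest instead.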
My bet would definitely be to use the special storage functions provided by Tampermonkey: GM_getValue, GM_setValue, GM_deleteValue. You can store your objects there as long as needed.
Just download the data from your server once, at the first run. If it's just for your own use, you can even simply insert all the data directly into a variable from the console, or use a temporary textarea, and have the script save that value with GM_setValue.
This way you can even optimize the speed of your script by keeping unrelated objects in different GM variables.
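A minimal sketch of that approach (the data URL is hypothetical; the value is stored as a JSON string for compatibility with managers that only persist primitives):

// Requires in the metadata block:
// @grant GM_getValue
// @grant GM_setValue
// @grant GM_xmlhttpRequest

var cached = GM_getValue('annotationData', null);
if (cached) {
    annotate(JSON.parse(cached));
} else {
    // First run: download once, persist, then use on every later run.
    GM_xmlhttpRequest({
        method: 'GET',
        url: 'https://example.com/annotations.json',
        onload: function (resp) {
            GM_setValue('annotationData', resp.responseText);
            annotate(JSON.parse(resp.responseText));
        }
    });
}

function annotate(data) {
    // ... add the annotations to the page using `data` ...
}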
I have a piece of JavaScript that works (alongside HTML) to produce the GUI for a program written in C++. The program has to run for a long time (sometimes 14 or 15 days, without monitoring).
The C++ and the JavaScript communicate by writing to and reading from an XML file.
After running the program for over 24 hours at a time, I've noticed the occasional JavaScript error: 'someArray[...].name' is null or not an object.
Now: these are all arrays filled with information taken from the XML file written by the C++. The contents of these arrays are refreshed every few seconds (to update information in the GUI 'live').
The question is: could these errors be caused by an access/timing problem, i.e. the JavaScript starts reading a line from the XML just as the C++ swoops in and rewrites that line, so information is parsed into the JavaScript arrays with some illegal characters (etc.), which throws the errors when those entries are accessed?
Hope that all makes sense. Thanks.
Your suggestion seems to provide a plausible explanation of what's happening.
You're probably seeing a race condition.
To fix it, you could implement a synchronization mechanism between C++ and JS.
The simplest form of sync I can think of is creating a second file each time C++ writes to your main XML file (this file acts as a lock). JS waits for the lock file to disappear before reading the XML. The same is done on the C++ side.
Sample code:
C++:
#include <filesystem>
#include <fstream>

namespace fs = std::filesystem;

while (programRunning) {
    // ... do stuff ...
    // Now it's time to write the XML.
    while (fs::exists("lockCpp.txt"))
        ; // do nothing, JS is reading
    {
        std::ofstream lock("lockJS.txt"); // create the lock file
    }
    // ... write to the XML file ...
    fs::remove("lockJS.txt"); // release the lock
}
JavaScript:
var fso = new ActiveXObject("Scripting.FileSystemObject");

while (programRunning) {
    // ... do stuff ...
    // Now it's time to read the XML.
    while (fso.FileExists("lockJS.txt"))
        ; // do nothing, C++ is writing
    fso.CreateTextFile("lockCpp.txt").Close(); // create the lock file
    // ... read the XML file ...
    fso.DeleteFile("lockCpp.txt"); // release the lock
}
This should in practice eliminate race conditions (some remain theoretically possible, but they are unlikely).
Should JS not be allowed to write to the file system, you could drop one of the lock files (lockCpp.txt) and, if the reading on the JS side is normally faster than the writing, it should still eliminate most conflicts.
EDIT after comment:
If you only have access to the JS side, you could check that the XML document is complete when reading, e.g. that the root element is correctly matched by a closing </rootElementName> at the end.
That will ensure the write is complete, provided the C++ doesn't write at random locations but always rewrites the whole document.
Another route would be checking that the file is not changing over time. If the C++ only sporadically writes to the XML, you can read it a few times over, say, a few seconds and, if it is unchanged, use the value you read; if it changed, keep waiting.
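A minimal sketch of both checks on the JS side (assuming the FileSystemObject is available; the file name and root element are placeholders):

var fso = new ActiveXObject("Scripting.FileSystemObject");

function readWholeFile(path) {
    var ts = fso.OpenTextFile(path, 1); // 1 = ForReading
    var text = ts.ReadAll();
    ts.Close();
    return text;
}

// Calls onReady(xmlText) once the document looks complete and has
// stopped changing between two reads.
function readStableXml(path, onReady) {
    var first = readWholeFile(path);
    if (/<\/rootElementName>\s*$/.test(first)) { // root element closed?
        setTimeout(function () {
            if (readWholeFile(path) === first) {
                onReady(first); // unchanged: safe to use
            } else {
                readStableXml(path, onReady); // still being written
            }
        }, 500);
    } else {
        setTimeout(function () { readStableXml(path, onReady); }, 500);
    }
}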
HTH