What I am doing is saving and retrieving a lot of images on the client.
(Now IndexedDB seemed like overkill for this simple job, but since it was the only cross-browser solution without a size limit (unlike localStorage), I had to use it ... and it works.)
This is what my db looks like:
(more specifically, the only object store of my db)
# | key (timeID) | value
0 | 812378123    | {data: "....", tnData: "...", timeID: 812378123}
1 | 912378123    | {data: "....", tnData: "...", timeID: 912378123}
2 | ...
The key is a unique timeID; data contains the image as a string, and tnData contains the thumbnail of that image as a string.
(when canvas.toBlob() is widely available I will switch to that)
To retrieve an image I just use store.get(id).
Now all of that works.
But now I just want to load the thumbnail ("tnData") - but NOT the full image ("data", which can be quite big).
So I hope there is something like store.get(id, "tnData") ...
But I have not found anything like that so far.
Does anyone know of an easy way of doing this, without having to rework my db?
Thanks in advance ... and sorry if I am not in the right place or have broken some other rule ... first question for me ;)
Does anyone know of an easy way of doing this, without having to rework my db?
No, IndexedDB doesn't let you return only part of an object.
Although reworking your db might not be too hard. You could break it into two object stores, one for thumbnails and one for whole images. Then you can query for just the one you want.
If you're often getting both at the same time, you'll have to do some benchmarking and see if it actually is faster to break them up, or maybe it could even make sense to duplicate thumbnails (one object store like you have now, and one with just thumbnails).
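If you do go that route, a rough sketch of the two-store layout (the database and store names here are made up) could look like this:

    var request = indexedDB.open("imageDB", 2);
    request.onupgradeneeded = function (e) {
        var db = e.target.result;
        // one store for the small thumbnails, one for the big images,
        // both keyed by the same timeID
        if (!db.objectStoreNames.contains("thumbnails"))
            db.createObjectStore("thumbnails", { keyPath: "timeID" });
        if (!db.objectStoreNames.contains("images"))
            db.createObjectStore("images", { keyPath: "timeID" });
    };

    // fetch only a thumbnail, without ever touching the big "data" string
    function getThumbnail(db, id, callback) {
        var store = db.transaction("thumbnails").objectStore("thumbnails");
        store.get(id).onsuccess = function (e) {
            callback(e.target.result); // e.g. { timeID: 812378123, tnData: "..." }
        };
    }

A full-image load then reads from "images" the same way, so the two records only travel together when you explicitly request both.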
I've been researching ways to retrieve orientation information from a JPEG file in pure JavaScript.
An excellent way to get this information is outlined in this SO answer. Essentially one reads the entire file using readAsArrayBuffer and then processes it for the required information.
However, is it really necessary to read the whole file to retrieve EXIF information? Is there an optimization whereby one can read a subset of bytes when doing this?
For instance, this SO answer seems to suggest the first 20 bytes are good enough for the job. However, the writer of the former answer asserts that he removed the slice statement because sometimes the tag came in after the limit (he had originally set it to 64KB, i.e. reader.readAsArrayBuffer(file.slice(0, 64 * 1024));).
So what's a rule of thumb one can use when programming this sort of thing? Or does one not exist at all? I want to write code whose performance isn't heavily affected by the size (in bytes) of the file uploaded by a user. That is my goal.
Note: I've tried Googling this as well, but haven't found anything meaningful.
Until a more seasoned expert chimes in, I've settled for reader.readAsArrayBuffer(file.slice(0, 128 * 1024));.
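For what it's worth, a slice-based read at least makes it possible to check whether the EXIF segment really landed inside the bytes you got, and to fall back to a bigger read when it didn't. A minimal sketch (assuming file is a File from an input element):

    function findExifSegment(file, callback) {
        var reader = new FileReader();
        reader.onload = function () {
            var view = new DataView(reader.result);
            if (view.getUint16(0) !== 0xFFD8) return callback(null); // no SOI marker: not a JPEG
            var offset = 2;
            while (offset + 4 <= view.byteLength) {
                var marker = view.getUint16(offset);
                var length = view.getUint16(offset + 2); // includes the 2 length bytes, not the marker
                if (marker === 0xFFE1) return callback({ offset: offset, length: length }); // APP1 = EXIF
                if (marker === 0xFFDA) return callback(null); // start of scan: no EXIF is coming
                offset += 2 + length;
            }
            callback(null); // ran off the end of the slice: retry with a bigger slice
        };
        reader.readAsArrayBuffer(file.slice(0, 128 * 1024));
    }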
My scenario is this: I have a very small array in my js file. When the page loads, I have a function that loops through the array and generates an li element for each item, displaying its name and price in the li. The array is constructed like this:
var gameList = [
    { name: "", value: 0.00 }
];
Secondly, I have a simple form on the page that allows me to add new items to the array, and using localStorage, it's possible for me to keep a dynamically updated array. I push new items into the array (gameList), then at the end of the session I set it using localStorage.
localStorage.setItem("updatedGameList", JSON.stringify(gameList));
I have a couple of lines at the start of my code that set my original array gameList to be equal to the locally stored, updated game list.
var retrievedData = localStorage.getItem("updatedGameList");
// fall back to the original array on a first visit, when nothing is stored yet
gameList = JSON.parse(retrievedData) || gameList;
So this is fine for now, but the growing array - which I want to keep and maintain - is only available in this browser, on this machine.
So, my question is, can I send this locally stored data somewhere? Maybe my personal domain? (Which is where I will host the app when it's finished.) That way I could then reference it properly in my js file so that the data is always available. Maybe the array could have its own js file?
I realise that this may not be the best way to be handling what is essentially a database. But I'm only part way through an online course and I'm using the tools that I have to make this work.
And lastly, in terms of maintenance of the array, is there any way to send it back to Sublime in the form of a .js file? I know this could be a crazy question. The updated array will become pretty big, maybe 200 items eventually, and it would be much easier to maintain from within Sublime.
Thanks for your time, and apologies if part of this request is ridiculous!! :)
I have just been reading about AJAX, and thought maybe there's a way to send the updated array as a JSON file to somewhere(!) on my website, and then request that same file at the start of each new session, so I'm always working with, and saving, the latest updated array.
Thanks for reading, and hopefully you have some answers! :)
Although not quite what I was looking for - essentially some way of automatically getting the new array, sending it somewhere more secure than local storage, then referencing the new array to give me the most up-to-date starting point each time (and all with just JavaScript) - the 'dirty' way suggested below turned out to be sufficient for now, until I start using databases.
From Kirupa, over at the forums:
Not a ridiculous question at all! You can send your own data anywhere you want, but it will require some level of server-related code. The easiest way to send data back and forth is through JSON, and you can convert your array into a JSON format easily using something like the following:
var jsonData = JSON.stringify(myArray);
From here, you can send this data to a database, to another web site, or to your e-mail server. If you want something really quick and dirty, you can literally just copy the contents of your JSON-ized array using the Chrome Dev Tools, save it on disk as a .js file, and reference it again in your app. That is a manual way of doing something that you don't really care about automating.
The best solution is to store this in a database. They've gotten easier to deal with as well. Firebase is my go-to for things like this, and this video might give you some ideas: https://www.youtube.com/watch?v=xAsvwy1-oxE
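And since the question mentions AJAX, here is a hedged sketch of that round trip. "/api/gamelist" is a made-up endpoint; it would need matching server-side code to store the JSON and hand it back:

    function saveGameList(gameList) {
        var xhr = new XMLHttpRequest();
        xhr.open("POST", "/api/gamelist");
        xhr.setRequestHeader("Content-Type", "application/json");
        xhr.send(JSON.stringify(gameList)); // same stringify step as above
    }

    function loadGameList(callback) {
        var xhr = new XMLHttpRequest();
        xhr.open("GET", "/api/gamelist");
        xhr.onload = function () { callback(JSON.parse(xhr.responseText)); };
        xhr.send();
    }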
I have an Angular app pulling data from a REST server. Each item we pull has some "core" data - what's needed to display its basic representation - and then what I call "secondary" data: comments and other things that the user might want to see, or might not.
I'm trying to optimize our request pattern to minimize the overall amount of time the user spends looking at a loading spinner. Pulling all (core and secondary) data at once makes the initial request return far too slowly; but pulling only the bare essentials until the user asks for something we haven't requested yet also creates unnecessary load time, at least insofar as I could have anticipated what they'd want to see and loaded it while they were busy reading the core content.
So, right now I'm doing a "core content" pull first and then initiating a "secondary" pull at the end of the success callback from the first. This is going to be an experimental process, but I'm wondering what (if any) best practices have been established in this situation. (I'm sure a good answer to that is a google away, but in this instance I'm not quite sure what to google - thus the quotation marks in this question's title)
A more concrete question: Am I better off initiating many small HTTP transactions or a few large ones? My instinct is to do many small ones, particularly if I can anticipate a few things the user is likeliest to want to see first and get those loaded as soon as possible. But surely there's an asymptote here? Or am I off-base in this line of thinking entirely?
I use the same approach as you, and it works pretty well for a many-keyed collection of 10,000+ items.
The collection is paginated with ui.bootstrap.pagination; a maximum of 10 items is displayed at once. It can be searched by title.
So my approach is to retrieve only id and title for the whole collection, so the search can be used straight away.
Then, since the items displayed on screen are held in an array, I place a $watch on that array. The job of the $watch is to go and fetch the full details of the items in the array (the secondary pull), but of course only when the array changes - see the sketch below.
So, in the worst case scenario, you are pulling the full details of only 10 items.
Results are cached for more efficiency. It displays instant results, as the $watch acts as a pre-loader.
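The shape of it is roughly this (names like visibleItems, /api/items, and the id field are made up for the sketch):

    var detailCache = {};

    $scope.$watchCollection('visibleItems', function (items) {
        angular.forEach(items, function (item) {
            if (detailCache[item.id]) {
                item.details = detailCache[item.id]; // cache hit: instant display
            } else {
                $http.get('/api/items/' + item.id).then(function (res) { // secondary pull
                    detailCache[item.id] = res.data;
                    item.details = res.data;
                });
            }
        });
    });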
Am I better off initiating many small HTTP transactions or a few large ones?
I believe large transactions, for just a few items (the ones which are clickable on the screen), are very efficient.
Regarding the best practice bit: I suppose there are many ways to achieve your goals; however, the technique you are using works extremely well, as it retrieves only what is needed, and only just before it is needed.
Besides, it is simple enough to implement.
Also, like you, I would have thought many smaller pulls were surely better than a few large ones. However, I was advised to go for a large pull in a comment on this question: Fetching subdocuments with angular $http
To answer your question about which keywords to search for, I suggest:
progressive loading
An alternative could be using websockets and streaming the load: Oboe.js does this quite well:
http://oboejs.com/examples
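A tiny sketch of the Oboe.js idea (the URL and the 'items.*' path are made up, and renderItem is a hypothetical function):

    oboe('/api/items')
        .node('items.*', function (item) {
            renderItem(item); // fires per item, as soon as each one has streamed in
        })
        .done(function (whole) {
            // the complete document, once everything has arrived
        });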
I can't figure out the keyword for what I'm looking for. When I google anything about URL encoding, storing data in URLs, or the like, I get all kinds of results except what I'm interested in. This is the only website I could find off the top of my head that shows what I'm after:
http://www.pathofexile.com/passive-skill-tree/AAAAAgMA37CCEEGWBUKusyycwbTk7HYRfq9JsgLjB6Vr230Y7IpzU8BU5oERUDGIeQMI9It6EHQOXEV-Va6X9JeVUlOPpkSrPV8EB0yzLR-NGeAS3Yy1heZM2V8ucA==
After tree/ it has a long code that is pretty much packed full of data. What should I look into to be able to do something like that? Is one supposed to create their own method according to what they need? Or is there a way one can just take one super long text and have a library encode it to make it smaller for the URL, and then decode it when the page loads?
I require tons of numbers, around 100. I figured it would be something like this: first, use a symbol to separate each 'variable' - in this case let's use '-' - and do something like this:
www.url.com/tree/1-1-1-0-3-2-1-3-4-5-2...total of 100 numbers..1-0-2, but then it gets encoded to be much smaller to something like
www.url.com/tree/xDgdmFdmnDfjSDfjSFdKflWepLS and this URL gets decoded once loaded, and the data retrieved and used behind the scenes.
Is there an easier way of doing this, or does one have to do it manually depending on their needs? By easier I mean: is there an existing way of encoding it, or does one have to write the encoding oneself? For example, make it so that if there are several of the same number next to each other, they get transformed into letters. Say there are five 3's in a row: the letter c could stand for the number 3, and a capital letter for the number of repeats, so cE would mean five 3's in a row.
My question is, is there a way to encode it or do I have to think of a way to encode it myself like I was writing in the example?
Any information you have related to this subject is GREATLY appreciated!! Thanks so much in advance for taking the time to read all this and reply, sorry to bother.
You are looking to base64-encode the data.
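A minimal sketch of that, assuming every number fits in a byte (0-255); the character swaps at the end produce the URL-safe "base64url" variant:

    function encodeNumbers(numbers) {
        var binary = String.fromCharCode.apply(null, numbers); // one byte per number
        return btoa(binary).replace(/\+/g, "-").replace(/\//g, "_");
    }

    function decodeNumbers(str) {
        var binary = atob(str.replace(/-/g, "+").replace(/_/g, "/"));
        var numbers = [];
        for (var i = 0; i < binary.length; i++) numbers.push(binary.charCodeAt(i));
        return numbers;
    }

For example, encodeNumbers([1, 1, 1, 0, 3, 2]) gives "AQEBAAMC" - already shorter than "1-1-1-0-3-2", and the saving grows with your ~100 numbers.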
Hoping someone can spot the error, because I'm having trouble
Alright, I built my own JSON.stringify, just for custom large objects. It may not be exactly to specification for some edge cases, but it's only meant for stringifying large objects that I'm building myself.
Well, it works, and works well for most objects, but I have an object I'm trying to stringify, and it's failing and printing this before exiting:
node.js:134
throw e; // process.nextTick error, or 'error' event on first tick
^
undefined
Not very helpful. The object is fine, because the regular call to JSON.stringify(object) works fine, and when I iterate over the object with for (var x in obj) if (obj.hasOwnProperty(x)) { myStringify(obj[x]); } that works fine, but if I call it on the top level of the object, it goes to hell... It doesn't really make sense to me, and the only thing I can think of is that the level of recursion is somehow breaking something...
The Parser : https://gist.github.com/958776 - The stringify function I'm calling
ObjectIterator.js : https://gist.github.com/958777 - Mostly to provide the asynchronous iteration
Edit: So, I iterated over the object one level deep and compared the resulting string to the output of JSON.stringify(sameLevelDeep), and they're equal. Since the output is equal, I'm not sure it's how I'm parsing something; possibly it's that the object is so large or the amount of recursion is so high?
Edit 2: So, I "fixed" the problem, I guess. Instead of pushing every 25th iteration to the next event loop, I push every fifth. I'm not sure why this would make a difference, but it does... I guess the question is now "Why does that make a difference?"
Okay well, beyond this being a very specific question helping a very specific person, I would like to take this somewhere different, which might also remove your problem and maybe help others.
Since you are not specifying why you are going through this process, I will have to break it down and guess -- and provide a solution for each guessed idea.
1. (Browser) You are trying to use JavaScript to crunch data, and provide the user with a result
Downloading at least several megabytes of raw data ("some of these objects are 5-10 million characters") on a webpage to process and display a result is far from optimal; you should probably be doing this operation on the server side and downloading the pre-calculated result.
Besides, no matter what you are doing, JavaScript does not support threads.
setTimeout(function () { JSON.stringify(data); }, 1); shouldn't be much different from what you are doing.
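If the goal is just to keep things responsive while stringifying, the usual shape is to batch the work and yield between batches. A rough sketch - it only handles the top level, delegating each value to the built-in stringify, and yields with setTimeout rather than process.nextTick (which, on older Node versions, is exactly the kind of thing behind the "process.nextTick error" shown above):

    function stringifyInChunks(obj, done) {
        var keys = Object.keys(obj), parts = [], i = 0;
        (function next() {
            var end = Math.min(i + 25, keys.length); // one small batch per turn
            for (; i < end; i++) {
                parts.push(JSON.stringify(keys[i]) + ":" + JSON.stringify(obj[keys[i]]));
            }
            if (i < keys.length) setTimeout(next, 0); // yield to the event loop between batches
            else done("{" + parts.join(",") + "}");
        })();
    }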
2. (Browser) You are trying to display the downloaded content
You should attempt to download smaller chunks instead of the whole 10+ million character content, and stringify those with the built-in JSON.stringify method.
3. (Non-browser) You are trying to use JavaScript for an application that requires threading
You should consider using a different programming language for this application.
In summary
I think you are climbing the wrong mountain; you can achieve the same thing by walking around it without breaking a sweat. If you want to climb a mountain for kicks, there are mountains out there that need it -- but it's not this one.
Translation: work on the architecture to make the obstacle obsolete instead of trying to solve it. If you want to solve a problem, there are problems out there that need solving -- but it's not this one.