Upload files asynchronously, then save data about them - javascript

I am building a way for users to upload tracks with information about each track, but I would like to do this asynchronously, much like YouTube does.
At the moment there is an API endpoint for tracks that accepts a POST request with the uploaded file and all the metadata. It processes the track, validates everything, and then saves the path to the track and all of its metadata in the database. This works perfectly, but I am having trouble thinking of ways to do it asynchronously.
The user flow will be:
1) User selects a track and it starts uploading
2) A form to fill in metadata shows and the user fills it in
3) Track is uploaded with its metadata to the endpoint
The problem is that the metadata form and the file upload are now two separate entities, and the file can finish uploading before the metadata is saved, or vice versa. Ideally, to overcome this, both the track and the metadata would be saved in the browser, as a cookie or something, until both are complete. At that point both would be sent to the endpoint and no changes would be required on the back end. As far as I am aware there is no way of saving files client-side like this - apart from that FileSystem API, which is pretty much deprecated.
If anyone has any good suggestions about how to do this, it would be much appreciated. In a perfect world there would be no changes to the back end at all, but small changes are probably going to be required. Preferably no database alterations, though.
Oh, by the way, I'm using Laravel and Ember.js, just in case anyone knows of any packages already doing this.

I have thought about this a lot, a few months ago.
The closest solution I managed to put together is to upload the file and store its filename, size, upload time (this is crucial) and other attributes in the DB, as usual. Additionally, I added a temporary column (more like a flag) which is initially set to TRUE and is only negated once the metadata has been sent.
Separately, I set up a cron job (I used Symfony2, but in Laravel it's all the same) that runs every 15-30 minutes and deletes the files (and corresponding database records) that have temporary = TRUE and have exceeded the time window. In my case that was 15 minutes, but you could set it coarser (every hour or so).
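For illustration, a minimal sketch of that cleanup job (here in TypeScript with node-cron; the db helper and the tracks table/columns are hypothetical stand-ins for whatever your schema looks like):

    // Sketch only: delete uploads whose metadata never arrived.
    import cron from "node-cron";
    import { unlink } from "fs/promises";
    import { db } from "./db"; // hypothetical query helper

    const EXPIRY_MINUTES = 15;

    cron.schedule("*/15 * * * *", async () => {
      const cutoff = new Date(Date.now() - EXPIRY_MINUTES * 60 * 1000);
      const stale = await db.query(
        "SELECT id, path FROM tracks WHERE temporary = TRUE AND uploaded_at < ?",
        [cutoff]
      );
      for (const row of stale) {
        await unlink(row.path); // remove the orphaned file...
        await db.query("DELETE FROM tracks WHERE id = ?", [row.id]); // ...and its record
      }
    });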
Hope this helps a bit :)


How can I suppress a 404 error when a file is not found?

I have a list of webpages (example.com/object/140, example.com/object/141, example.com/object/142, ...)
and each page should have a particular background image (example.com/assets/images/object/140.jpg, example.com/assets/images/object/141.jpg, ...).
Some images are missing, in which case I use a default image. The problem is that when I check whether an image exists, I get a 404 error. I have already seen on several pages that there isn't a direct way to avoid this.
Then I did the following: I created a service in the backend (C#) that checks whether the file exists using File.Exists(fileName). That way I managed to avoid the error on my localhost. So far so good.
Now I have published the frontend and backend as two different services in Azure. The images are in the frontend but the file service is in the backend, so my method no longer works, because I can't directly access the frontend folders from the backend. One solution could be to make an HTTP call from the backend to the frontend, but I think that doesn't make much sense; it's getting too messy.
One option could be to store in the DB a boolean with the (non)existence information, but I think this is prone to inconsistencies (if the boolean is not updated immediately when a new image is uploaded or deleted, for example), even if I run a daily job to clean it up.
Still another option could be to store the images directly in the DB and retrieve them together with the DTOs of the objects I'm loading on each particular page, but I guess images that are shown only in the frontend should be stored in the frontend... shouldn't they?
Therefore:
a) Are any of these ideas acceptable? Is there a better way to avoid this error?
b) Another possibility: is there a way to access the frontend folders from the backend? I get a bit lost with the publishing and artifacts in Azure and I don't know if I could do it somehow.
I'm not sure how you've built the frontend, but I'm assuming the background images are set using CSS. It is possible to set multiple background images in the same rule, and the browser will load them all and layer them one beneath the other - if the first one loads successfully, and isn't transparent, it is the only thing the user will see. But if the first image fails to load - for example, because it doesn't exist - the second image will be shown.
See this other answer for more details: https://stackoverflow.com/a/22287702/53538
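For illustration, a minimal sketch of that technique (here applied from TypeScript; the default-image path is an assumption, and a plain CSS rule works the same way):

    // List the specific image first and the default second: the browser
    // stacks them, so the default only shows if the first one fails to load.
    const objectId = 140; // example page from the question
    document.body.style.backgroundImage =
      `url('/assets/images/object/${objectId}.jpg'), url('/assets/images/default.jpg')`;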

Saving filename in DB after uploading to GCP Storage or using bucket.getFiles()

I've been searching on StackOverflow, but it seems this question has not been asked yet. It's an architecture question about files being uploaded to GCP Storage.
TL;DR: Is there any issue with using bucket.getFiles() directly (from a server), rather than storing each filename in my DB, asking for them one by one, and returning the array to the client?
The situation:
I'm working on a feature that will allow the user to upload image attachments linked to a delivery note. A delivery note can have multiple attachments.
I use a simple upload button on my client (a mobile device) and upload the content to GCP under a path/to/id-deliveryNote folder, such as: path/to/id-deliveryNote/filename.jpg, path/to/id-deliveryNote/filename2.jpg, etc.
Somewhere else in the app, the user should be able to click and download each of those attachments.
The solution
After the upload is done in GCP, I asked myself how to read those files back and give the user a download link for each file. That's when I found the bucket.getFiles() function.
Since my file paths all sit below the same id-deliveryNote/ prefix, I can leverage bucket.getFiles() with that prefix and, after the promise resolves, safely return the list of available links to the user.
The issue
I do not store the filenames in my deliveryNote table in my DB, which can sound a bit problematic, since I am relying on GCP to know the attachments of a deliveryNote. The way I see it, I do not need to replicate the information in our DB (and possibly handle failure in two spots); if I need those files, I will just ask GCP for their links. The opposite way of thinking is that, by storing the names, you can list the attachments for the clients and then generate the download link when the user clicks a specific attachment.
My question is: is there any issue with using bucket.getFiles() directly (from a server), rather than storing each filename in my DB, asking for them one by one, and returning the array to the client?
Some points that could influence the choice of method:
Any difference in GCP costs per call?
Invalid application data structure ?
Other things ?
There is no issue with using this method to return the links for the files to download. The API documentation for this method - accessible here - even shows an example of returning files using prefixes. You just need to watch out for the fact that Cloud Storage doesn't actually use real folders, only object names that look like they are in folders - more details on this here - so you don't mix up the concepts when working with names and prefixes.
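For example, a short TypeScript sketch of that pattern with the Node.js client (the bucket name is a placeholder, and signed URLs are just one common way to turn the listed files into download links):

    import { Storage } from "@google-cloud/storage";

    const storage = new Storage();
    const bucket = storage.bucket("my-bucket"); // placeholder bucket name

    // List every object under the delivery note's prefix and build
    // short-lived download links to return to the client.
    async function getAttachmentLinks(deliveryNoteId: string): Promise<string[]> {
      const [files] = await bucket.getFiles({ prefix: `path/to/${deliveryNoteId}/` });
      return Promise.all(
        files.map(async (file) => {
          const [url] = await file.getSignedUrl({
            action: "read",
            expires: Date.now() + 15 * 60 * 1000, // link valid for 15 minutes
          });
          return url;
        })
      );
    }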
For the pricing point, you can find the whole pricing for Google Cloud Storage in this documentation, including how much each operation costs - for example, $0.02 per 50,000 operations for object gets, retrieving bucket and object metadata, and so on. After you check that, you can compare it with your database costs to see whether this point will impact you.
To summarize, there is no problem with following this approach. The advantage of also storing the names in the database is actually that, even though you could then have failures in two spots, it's more probable that you'll face issues in only one place, so the replication can be a great thing to have. You just need to decide which option fits you best.

Display result (image) of computation in website

I have a Python script that generates a heightmap depending on parameters, which will be given via an HTML form. How do I display the resulting image on a website? I suppose the form's submit button will hit an endpoint with the given parameters and the script that computes the heightmap will run then, but how do I get the resulting image and display it on the website? Also, the computation takes a few seconds, so I suppose I need some kind of task queue to not make the server hang in the meanwhile. Tell me if I'm wrong.
It's a bit of a general question because I myself don't know the specifics of what I need to use to accomplish this. I'm using Flask in the backend but it's a framework-agnostic question.
Save the image to a file. Return a webpage that contains an <IMG SRC=...> element. The SRC should be a URL pointing at the file.
For example, suppose you save the image to a file called "temp2.png" in a subdirectory called "scratch" under your document root. Then the IMG element would be <IMG SRC="/scratch/temp2.png"> .
If you create and save the image in the same program that generates the webpage that refers to it, your server won't return the page until the image has been saved. If that only takes a few seconds, the server is unlikely to hang. Many applications would take that long to calculate a result, so the people who coded the server would make sure it can handle such delays. I've done this under Apache, Tomcat, and GoServe (an OS/2 server), and never had a problem.
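A minimal sketch of that flow (the question mentions Flask, but the pattern is framework-agnostic; this version uses TypeScript/Express, and generateHeightmap is a hypothetical stand-in for the actual computation):

    import express from "express";
    import { writeFile } from "fs/promises";
    import path from "path";

    declare function generateHeightmap(params: unknown): Buffer; // hypothetical

    const app = express();
    app.use(express.urlencoded({ extended: true }));
    // Serve the saved images from the "scratch" subdirectory.
    app.use("/scratch", express.static(path.join(__dirname, "scratch")));

    app.post("/heightmap", async (req, res) => {
      const png = generateHeightmap(req.body); // takes a few seconds
      const name = `temp${Date.now()}.png`;    // unique-ish temporary name
      await writeFile(path.join(__dirname, "scratch", name), png);
      // The page isn't returned until the image is on disk, so the
      // IMG SRC below always points at an existing file.
      res.send(`<html><body><img src="/scratch/${name}"></body></html>`);
    });

    app.listen(3000);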
This method does have the disadvantage that you'll need to arrange for each temporary file to be deleted after an expiry period such as 12 hours or whenever you think the user won't need it any more. On the webpage you return, if the image is something serious that the user might want to keep, you could warn them that this will happen. They can always download it.
To delete the old files, write a script that checks when they were last updated, compares that with the current date and time, and deletes those files that are older than your expiry period.
You'll need a way to automatically run it repeatedly. On Unix systems, if you have shell access, the "cron" command is one way to do this. Googling "cron job to delete files older than 1 hour on web server" finds a lot of discussion of methods.
Be very careful when coding any automatic-deletion script, and test it thoroughly to make sure it deletes the right files! If you make your expiry period a variable, you can set it to e.g. 1 minute or 5 minutes when testing, so that you don't need to wait for ages.
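For instance, a sketch of such a deletion script in TypeScript (the scratch directory path is an assumption; note the expiry period is a variable, per the testing advice above):

    import { readdir, stat, unlink } from "fs/promises";
    import path from "path";

    const SCRATCH_DIR = "/var/www/scratch"; // assumed image directory
    const EXPIRY_MS = 12 * 60 * 60 * 1000;  // 12 hours; set to 1 minute while testing

    async function deleteExpired(): Promise<void> {
      for (const name of await readdir(SCRATCH_DIR)) {
        const file = path.join(SCRATCH_DIR, name);
        const { mtimeMs } = await stat(file); // last-modified time
        if (Date.now() - mtimeMs > EXPIRY_MS) {
          await unlink(file); // older than the expiry period: delete it
        }
      }
    }

    deleteExpired().catch(console.error);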
There are ways to stream your image back without saving it to a file, but what I'm recommending is (apart possibly from the file deleter) easy to code and debug. I've used it in many different projects.

About Google Analytics collect data

First of all, hi to all of you (I'm new here).
I'm having a look at how Google Analytics works, as I'm going to develop a similar tracking JS to collect all the data I need for my websites. As far as I can see, the ga.js script sends all the data (maybe not all, but a good part of it) with a GET request for a 1x1 GIF, with all the parameters appended.
Seen here: How does google analytics collect its data?
So, on the server side, it seems the only way to "read" all these parameters is to analyze the server logs and then collect everything into my database?
Is this the best option for getting user data?
I think the server logging could "switch files" every 2 hours, so you can analyze the file for the past 2 hours and show "not that old" data in your graph!
Of course it will never be a "realtime" graph, but a 2-hour delay could be acceptable, I think.
I think you can simply put a script (PHP, for example) at the image path and have the script return the image as its response. By doing this you can act in real time, since in the script you can get all the data that would otherwise only be present in your server log.
If you want to try my solution, I think a good starting point (in PHP) would be this to create the GIF image; then you can use the data located in $_SERVER to start gathering data!
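The answer above suggests PHP; for illustration, here is the same idea sketched in TypeScript/Express instead (a tracking endpoint that logs the request data a server log would contain, then returns a 1x1 transparent GIF):

    import express from "express";

    const app = express();

    // A 1x1 transparent GIF, base64-encoded.
    const PIXEL = Buffer.from(
      "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
      "base64"
    );

    app.get("/collect.gif", (req, res) => {
      // Everything PHP exposes via $_SERVER is available here in real time.
      console.log({
        ip: req.ip,
        userAgent: req.get("user-agent"),
        referer: req.get("referer"),
        params: req.query, // the tracking parameters appended to the GIF URL
      });
      res.type("image/gif").send(PIXEL);
    });

    app.listen(3000);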

Saving dynamically created content to server & load from there

So I have this webpage that I'm making which allows people to create elements on the page on the fly. I want to be able to save those elements to my server, and whenever someone else reloads that page, the webpage will show those saved elements.
I'm not a good web programmer by any means, so take it easy with the web jargon xD
The user-created elements are nested <div>s or lists. Those elements can be deleted at any time as well.
So I was reading about saving them as JSON, but how would I go about doing that, as most of my top-level <div>s will have the same class? I've never worked with JSON before, so I'm a real noob at that.
Will the server file keep replacing itself with a brand new copy with each addition/deletion?
And I'd like a little help with showing the new elements on other users' pages without them refreshing. I read about real-time updating with AJAX, like APE, but have no idea how to go about that. (This is not really needed, but it would be nice to have.)
If someone can guide me a little at least, that will be great. Thanks.
The most suitable way to accomplish this is by saving your objects' attributes to a database; other options include XML files, etc.
The process of accomplishing it through a database is:
If you want to save data to a database, you will have to use a server-side language like PHP or ASP.NET, so the first step will be to have a database, and then an active connection to your database in an intermediate file (let's say data.php)
Then you need to code your data.php file so that it can take input (usually through the GET or POST method) and save it to your database
Then you need to pass your data (the objects' attributes) through AJAX to data.php and save them to your database (see the sketch after these steps)
In the main file you will have to check whether some data already exists for the user; if yes, fetch it from the database and display the objects accordingly, otherwise set the objects' preferences to defaults
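For example, the AJAX step might look like this in TypeScript (data.php is the hypothetical intermediate file named above, and the payload shape is just an assumption about how the elements could be serialized):

    // Assumed serialized shape of a user-created element.
    interface SavedElement {
      tag: string; // e.g. "div" or "ul"
      cssClass: string;
      content: string;
    }

    // POST the current elements to data.php, which writes them to the database.
    async function saveElements(elements: SavedElement[]): Promise<void> {
      await fetch("data.php", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(elements),
      });
    }

    // On page load, fetch whatever the user saved and re-create the elements.
    async function loadElements(): Promise<SavedElement[]> {
      const res = await fetch("data.php");
      return res.json();
    }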
