I am using the standard Python App Engine environment and am currently looking at how to upload multiple large media files to Google Cloud Storage (publicly readable) using App Engine or the client directly (preferred).
I currently send a bunch of smaller images (a maximum of 20 at a time, between 30 and 100 KB each on average) directly to the server via a single POST. These images are provided by the client and put in my project's default bucket. I handle the request's images in a separate thread, write them one at a time to Cloud Storage, and then associate each with an ndb object. This is all fine and dandy while the images are small and do not cause the request to run out of memory or raise a DeadlineExceededError.
But what is the best approach for large image files of 20 MB+ apiece, or video files of up to 1 GB in size? Are there efficient ways to do this from the client directly? Would this be possible via the JSON API (a resumable upload, for example)? If so, are there any clear examples of how to do this purely in JavaScript on the client? I have looked at the docs, but it's not intuitively obvious, at least to me.
I have been looking at the possibilities for a day or two, but nothing hits you with a clear, linear description or approach. I notice the Google docs describe a way, using PHP, to upload via a POST direct from the client: https://cloud.google.com/appengine/docs/php/googlestorage/user_upload. Is this relevant only to PHP on App Engine, or is there an equivalent of createUploadUrl for Python or JavaScript?
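From the JSON API docs, the resumable flow I think I'm after would look roughly like this in the browser (an untested sketch: BUCKET, the object name, and the OAuth token are placeholders, and the bucket would need CORS configured so the Location header is readable):

    // Untested sketch of a resumable upload via the GCS JSON API.
    // BUCKET and accessToken are placeholders I'd have to fill in.
    function startResumableUpload(file, objectName, accessToken) {
      var initUrl = 'https://www.googleapis.com/upload/storage/v1/b/BUCKET/o' +
                    '?uploadType=resumable&name=' + encodeURIComponent(objectName);
      var init = new XMLHttpRequest();
      init.open('POST', initUrl);
      init.setRequestHeader('Authorization', 'Bearer ' + accessToken);
      init.setRequestHeader('X-Upload-Content-Type', file.type);
      init.onload = function () {
        // The upload session URI comes back in the Location header;
        // the file (or Content-Range chunks of it) is PUT to that URI.
        var sessionUri = init.getResponseHeader('Location');
        var put = new XMLHttpRequest();
        put.open('PUT', sessionUri);
        put.send(file); // a File/Blob, so the browser streams it from disk
      };
      init.send();
    }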
Anyway, I'll keep exploring but any pointers would be greatly appreciated.
Cheers
Try the Blobstore API with Cloud Storage as the backing store, or the Images service. The Python equivalent of the PHP createUploadUrl you found is blobstore.create_upload_url, which accepts a gs_bucket_name argument so uploads land in Cloud Storage; the client POSTs the file to that URL and App Engine handles the transfer outside your request, so the memory and deadline limits you're hitting don't apply.
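On the client there's nothing special to do: once your Python handler returns the URL from create_upload_url, the browser just POSTs the file to it. A rough sketch (the /get_upload_url endpoint is made up; you'd serve it from a small Python handler):

    // Hypothetical client side: ask the server for a one-shot upload
    // URL, then POST the file to it as multipart form data.
    function uploadViaBlobstore(file) {
      var urlReq = new XMLHttpRequest();
      urlReq.open('GET', '/get_upload_url'); // made-up endpoint
      urlReq.onload = function () {
        var form = new FormData();
        form.append('file', file);
        var post = new XMLHttpRequest();
        post.open('POST', urlReq.responseText);
        post.send(form);
      };
      urlReq.send();
    }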
Related
How can I let the client upload an image in the browser, and then upload it from my server to Amazon S3? I have looked around a lot and have found no resources explaining how to do this.
Are there any tutorials that I could follow?
Are there any libraries that I should use for this?
I am using AngularJS on the frontend and Node.js on the backend.
In short, look for two different tutorials. One for uploading from a client to a server, one for uploading from a server to S3.
StackOverflow discourages linking to specific tutorials, but there are lots of them out there, so it shouldn't be too tricky to track down.
For the client-to-server, you'll want to do a basic HTML form upload up to the server, then snag the data. You can temporarily write it to your file system (if you're on Linux, the /tmp directory is a good place to stash it).
After that, just upload from your server to S3. Amazon itself has some good documentation on that. The s3 package for Node also has good examples: https://www.npmjs.com/package/s3
It's also possible to go straight from the browser to S3, which may be better depending on your use case. Check this out: http://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/s3-example-photo-album.html
You're going to need the AWS SDK for Node. They also have a pretty comprehensive developer guide. You may have to read up on credential management, too.
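A minimal sketch with the SDK (v2 style; assumes credentials come from the environment or ~/.aws/credentials):

    var fs = require('fs');
    var AWS = require('aws-sdk');
    var s3 = new AWS.S3();

    function uploadToS3(localPath, bucket, key, callback) {
      s3.upload({
        Bucket: bucket,
        Key: key,
        Body: fs.createReadStream(localPath) // streamed, so big files are OK
      }, callback); // callback(err, data); data.Location is the file's URL
    }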
The procedure would be as follows:
user uploads the image from the browser to your server (I'd recommend a plain form upload, unless you feel OK with uploading via AJAX; a receiving sketch follows this list)
then your server uses the SDK to save to S3
you display info back to the user (a link to the image, upload status?)
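For that first step, a minimal sketch of receiving the form upload with Express and multer (assuming that's your stack; multer writes the incoming file to /tmp for you):

    var express = require('express');
    var multer = require('multer');
    var app = express();
    var upload = multer({ dest: '/tmp' }); // temp storage for incoming files

    // 'image' must match the name attribute of the form's file input
    app.post('/upload', upload.single('image'), function (req, res) {
      // req.file.path points at the temp file; hand it to your
      // S3 upload (step 2), then report back to the user (step 3).
      res.json({ status: 'uploading', file: req.file.originalname });
    });

    app.listen(3000);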
You could also use pre-signed POSTs, but that seems more advanced and I haven't seen much info on it for Node.
I have found several threads where the same question has been asked, but I suspect that the top answer in most of them is outdated.
My problem
I have a frontend JavaScript app communicating with an OAuth-authenticated API. This API contains files I want my users to be able to download. Because the API requires authentication, I cannot show the user a regular link to initiate the download.
Instead, I have to send an XHR request to initiate the download (so I can add the necessary authentication header).
In my case the files will usually be pretty large (>1 GB), so keeping them in memory is not a solution.
Reading this article, I'm wondering if it might be possible to stream the file from the API to the filesystem through the JavaScript File API. Does anyone have a suggestion on how I might make this work?
Isn't this a pretty common problem in 2016?
It is somewhat hack-ish, but I've used it before and it works wonders.
From Downloading file from ajax result using blob
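The gist of it is to fetch the file as a Blob and hand it to the browser through an object URL (url, token, and the filename here are placeholders; note the whole response is still materialized as a Blob first, so it's not true streaming):

    var xhr = new XMLHttpRequest();
    xhr.open('GET', url);
    xhr.setRequestHeader('Authorization', 'Bearer ' + token);
    xhr.responseType = 'blob'; // the response arrives as a Blob
    xhr.onload = function () {
      var a = document.createElement('a');
      a.href = URL.createObjectURL(xhr.response);
      a.download = 'file.bin'; // suggested filename for the save dialog
      document.body.appendChild(a);
      a.click();
      document.body.removeChild(a);
      URL.revokeObjectURL(a.href);
    };
    xhr.send();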
I want to stream data from my Dropbox to a webpage in real time, but I don't know how to do it.
It's usually a bad idea, because Dropbox can throttle transfer speeds and may stop sharing a file when it's accessed from too many locations.
You can install Dropbox on your server and sync a folder with your Dropbox account:
https://www.dropbox.com/install
And streaming from that local folder is an easier task.
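If your server is Node, for instance, serving a file from the synced folder is just a read stream piped into the response (a rough sketch; the folder path is made up):

    var fs = require('fs');
    var path = require('path');
    var http = require('http');

    var SYNCED_DIR = '/home/app/Dropbox/videos'; // hypothetical synced folder

    http.createServer(function (req, res) {
      // basename() keeps requests from escaping the synced folder
      var file = path.join(SYNCED_DIR, path.basename(req.url));
      fs.createReadStream(file)
        .on('error', function () { res.writeHead(404); res.end(); })
        .pipe(res); // streamed straight to the client, nothing buffered
    }).listen(8080);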
But if you really need to get files from Dropbox in real time, you can use their API. They've got libraries for many languages; for example, here is the PHP one, with a tutorial:
https://www.dropbox.com/developers-v1/core/start/php
I have a mobile phone application where I allow users to take photographs and persist the picture file to S3. I am using a Node.js server to do the persistence to S3.
On the node.js server side I have exposed an API which does the following:
1. Constructs an in-memory buffer of the file data sent by the client (the mobile device that took the photograph).
2. Persists this data to the local file system.
3. Creates a thumbnail from the original picture file in the local fs (thumbnail creation is another requirement I have) and persists the thumbnail to the local fs. I use easyimg for thumbnail creation.
4. Persists the original pic and thumbnail from the local fs to S3.
I have a few questions about the process above:
The above process works for me; however, I want to avoid steps 1 and 2 (the in-memory buffer creation and the persistence to the local file system), as Node is just acting as a conduit for data streamed to S3. Can someone give me pointers to code that does this? (What I'm imagining is sketched after these questions.)
Is easyimg a good way to create thumbnails? What are the advantages and disadvantages of using this module? Is there something better in terms of performance and ease of use?
I am OK with some delay in thumbnail generation, so the thumbnail file need not be persisted to S3 along with the original file. Does this help in any way?
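On the first question, here is the kind of thing I'm imagining (an untested sketch; it assumes the AWS SDK for Node and a client that POSTs the raw file bytes rather than a multipart form):

    var AWS = require('aws-sdk');
    var express = require('express');
    var s3 = new AWS.S3();
    var app = express();

    app.post('/photos/:name', function (req, res) {
      // req is itself a readable stream; s3.upload does a multipart
      // upload under the hood, so no buffer or temp file is needed.
      s3.upload({
        Bucket: 'my-photo-bucket', // placeholder bucket name
        Key: req.params.name,
        Body: req
      }, function (err, data) {
        if (err) return res.status(500).send(err.message);
        res.json({ url: data.Location });
      });
    });

    app.listen(3000);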
I'm looking to confirm or refute the following:
From what I have read so far, it is not possible to write a web application with only JavaScript (no server-side logic), served from Amazon S3, that also stores data only in S3, if you need to have multiple clients with private data per client.
The issue I see is the Authorization header required for every AJAX call, which would force me to put the signature (and my AWS ID) right there in the page source for everybody to see.
Is that correct, or have I misunderstood the docs?
Are there workarounds?
In short, you are correct.
If your AWS key ends up in any way on the client side, you are in trouble.
A possible solution is, of course, to have the user specify their AWS key for storing their data.
I'm working on a project that will do something similar; mine will have users supply their own S3 credentials, which I store in HTML5 localStorage. It's a bit tricky, but I've got the basics working.
It involves a JavaScript program that replicates itself into S3, loads itself back from S3, and then transfers credentials and control to the S3-loaded version.
I'm using the excellent SJCL for signature generation and jQuery's AJAX functionality where I can.
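For what it's worth, the signing step itself is only a few lines with SJCL. Roughly, for the old Signature Version 2 scheme (stringToSign assembled per the S3 REST auth docs; this assumes a SJCL build that includes SHA-1, since V2 uses HMAC-SHA1):

    // Rough sketch of S3 Signature Version 2 signing with SJCL.
    // secretKey is the user's own AWS secret; it never touches my servers.
    function signS3Request(secretKey, stringToSign) {
      var keyBits = sjcl.codec.utf8String.toBits(secretKey);
      var hmac = new sjcl.misc.hmac(keyBits, sjcl.hash.sha1);
      return sjcl.codec.base64.fromBits(hmac.encrypt(stringToSign));
    }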
My work so far simply initializes the S3-side application and does a test PUT/GET sequence against S3. I also rewrote a jQuery postMessage plugin (which Stack Overflow won't let me link to, for lack of rep) for communicating between my frames.
In my case, I'm trying to fit the entire application into a single HTML file so that I don't have to do as much initial transfer into S3, but perhaps there are other ways to work this out.
iBeans offers a way around this without having to write any server-side code. There's an S3 iBean (a developer is working on it, to be released in the next few days; watch the MuleSoft blog for an announcement), and you can access it right from your JavaScript. The iBean itself runs on a server, so you wouldn't need to store your keys in the JavaScript.