I am trying to allow users to upload large files without tying up my servers for an extended amount of time. I thought using Dropbox as file storage might be a good solution. My plan is to use JavaScript to have the client side connect directly to Dropbox, so that my server is not affected.
I have been trying to find a current JavaScript Dropbox API, but have not had much success. I tried using dropbox-js, but it seems to be using an outdated version of the API, as I get the following error with my current test: {"error": "You're using an older version of the Dropbox API with a new API key. Please use the latest version."}
Does anyone know a fairly simple way to accomplish this task?
Set up your application as a Folder app. If things go wrong, at least you won't blow up people's Dropboxes.
Follow these directions for obfuscating your API key and secret.
Use writeFile to upload the files, and then use makeUrl with the downloadHack: true option, then send the URL to your server.
You'll need the git version of dropbox-js to use downloadHack until the 0.7.0 release comes out.
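A minimal sketch of those steps against the 0.7-era dropbox-js API — the "APP_KEY" value and the uploadPath helper are placeholders of mine, not part of dropbox-js:

```javascript
// Sketch only: assumes the dropbox-js browser script is loaded on the page.
// sandbox: true matches a Folder-app setup, so mistakes stay inside the
// app's own folder.
var client = typeof Dropbox !== "undefined"
  ? new Dropbox.Client({ key: "APP_KEY", sandbox: true })
  : null;

// Build a Dropbox path from a File object's name; strips any directory
// components the browser might leave in. Pure helper, my own naming.
function uploadPath(fileName) {
  return "/" + fileName.split(/[\\/]/).pop();
}

function uploadAndLink(file, done) {
  client.authenticate(function (error) {
    if (error) { return done(error); }
    client.writeFile(uploadPath(file.name), file, function (error, stat) {
      if (error) { return done(error); }
      // downloadHack: true asks for a long-lived direct-download URL
      client.makeUrl(stat.path, { downloadHack: true }, function (error, urlInfo) {
        if (error) { return done(error); }
        done(null, urlInfo.url); // POST this URL to your server
      });
    });
  });
}
```

That way your server only ever receives a short URL, never the file bytes.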
Related
I'm going to build a project using the Google Translate API, and I'm thinking of uploading it to a server and just sharing it with my friends. Unfortunately, the API key I use in the project is clearly visible in the JavaScript file, which is a very bad situation. To prevent abuse, I have restricted the key in Google Cloud so that, as far as I understand, it is only allowed to be used from the URLs I permit; it cannot be used anywhere else. Now my main question is: is this method enough to protect the API key from malicious people? Do I need to do anything else? Thank you in advance for your answers.
Best practice in these cases is to use .env files to keep data like API keys out of source control. Note, though, that for purely client-side code the key still ends up in the bundle shipped to the browser, so on its own this doesn't hide it from users.
You have to create a server for that, which will perform OAuth and then send the API request to Google on the client's behalf.
You can get help about how to implement OAuth from this topic provided by google: https://developers.google.com/identity/protocols/oauth2/javascript-implicit-flow
If you ship your API key in frontend JavaScript, anyone can read it from the page source and use it to:
Send fake requests that burn through your quota, bandwidth, etc.
You should also consult the TOS.
On November 5th, 2014, Google made some changes to the APIs Terms of Service.
Like you, I had an issue with the following line:
Asking developers to make reasonable efforts to keep their private
keys private and not embed them in open source projects.
That is, however, really only an issue if you are releasing the source code of your app as an open source project, for example.
If you're just hosting this on a server, then what you should do is set up restrictions for the API key (adding application restrictions); you can limit it so that the API key can only be used from your server and nowhere else.
I have a question: is there a way to get the OData data without a Cloud Connector? So basically like http/https://serverip:port/sap/opu/odata/sap/...?
If I try it through the browser it works; I get my metadata.
My manifest looks like this:
I tried it with http as well, but it won't work because the origin request is https (Web IDE). The console says:
I already tried it with /proxy/ before the IP, but got almost the same result; the error messages went away except for one: [ODataMetadata] initial loading of metadata failed. I don't even get the login popup like before via the Cloud Connector. So is there a way to do it like this, so that I can develop in Web IDE? Someday I want the application to be a standalone app that doesn't need SCP.
Thanks guys.
If you are using SAP Web IDE, the easiest way to do it, would be to add a Destination for your backend system in SAP Cloud Platform, and then add an OData service using that Destination in SAP Web IDE (this will create entries in manifest.json and create wiring to the destination in neo-app.json). As your particular endpoint works through your browser, you wouldn't need to use Cloud Connector.
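For reference, the route that gets generated in neo-app.json typically looks something like the sketch below; "MyBackend" and the paths are placeholders for the destination you create in the SAP Cloud Platform cockpit:

```json
{
  "routes": [
    {
      "path": "/sap/opu/odata",
      "target": {
        "type": "destination",
        "name": "MyBackend",
        "entryPath": "/sap/opu/odata"
      },
      "description": "Backend OData services via the MyBackend destination"
    }
  ]
}
```

Requests from your app to /sap/opu/odata then get proxied through the destination, which is why no CORS or mixed-content errors show up in the console.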
I have been through several of the Amazon docs on this, but I still can't find a simple solution. I want to create a simple web page that allows users to upload images to my S3 bucket. Whenever I follow an example, I always get "missing credentials" as a response.
I also want to integrate a simple log-on using the mobile phone with Amazon's Cognito. It is a single-page application with no server back end. Happy to use Angular 1 in the page.
I have the AWS account set up, but I am struggling to find a simple example of how to do this. Does anyone have an example of how I might do this, or know where a tutorial that explains this very simply might be?
I am trying to make a request to the YouTube API for multiple video IDs, and understand that this is referred to as 'batch processing'.
So far I have successfully made individual requests by appending the video id, request parameters and API key to the following url for the request:
https://www.googleapis.com/youtube/v3/videos?
becomes:
https://www.googleapis.com/youtube/v3/videos?part=snippet,contentDetails,statistics,player&id=ZYpxsJHVC-0&key=
Using this method I am able to retrieve data for multiple videos by comma-separating multiple IDs after 'id=' in the request URL. I think that I am doing something similar to the videos.list method mentioned in this post: YouTube API v3 batch processing and documented here: https://developers.google.com/youtube/v3/docs/videos/list#request
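To sketch that comma-separated form as code (the function names are mine; the 50-ID cap is the documented limit per videos.list call):

```javascript
// Build a videos.list request URL for up to 50 comma-separated video ids.
function buildVideosUrl(ids, apiKey) {
  const base = "https://www.googleapis.com/youtube/v3/videos";
  const params = new URLSearchParams({
    part: "snippet,contentDetails,statistics,player",
    id: ids.join(","),
    key: apiKey,
  });
  return `${base}?${params}`;
}

// Split a longer id list into 50-id chunks, one request per chunk.
function chunkIds(ids, size = 50) {
  const out = [];
  for (let i = 0; i < ids.length; i += size) out.push(ids.slice(i, i + size));
  return out;
}
```

So for, say, 120 videos you would end up making three requests, one per chunk, rather than anything fancier.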
Is there some other batch processing method I should be using instead of adding 50 different video IDs to the request URL? Considering I am using Node, it seems like using Google's Node.js client library would make sense, but I couldn't find documentation on how to make YouTube Data API requests specifically.
The API Client Library for JavaScript ( https://developers.google.com/api-client-library/javascript/features/batch#batch-request-promise ) actually sounds exactly like what I need because it supports promises; however, the doc page doesn't mention how to download it. I was hoping to find some sort of 'npm install' command, but it's not there. If this is the recommended method for retrieving video data for multiple videos, could someone point me in the right direction as far as implementing this library in my project? Thanks!
Google wants you to call their API Client Library for Javascript directly from their servers, as you can see in their authSample script:
<script src="https://apis.google.com/js/client.js?onload=handleClientLoad"></script>
Google has also released Node.js client libraries that can be installed using npm. The installation instructions are written around the Calendar API, but the same googleapis package also covers the YouTube Data API.
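A rough sketch of what that looks like with the googleapis package (npm install googleapis) — I've guarded the require so the snippet loads even where the package isn't installed, and the exact parameter shapes should be checked against the current docs:

```javascript
// Pure helper: build the parameter object for youtube.videos.list.
function videosListParams(ids) {
  return {
    part: ["snippet", "contentDetails", "statistics"],
    id: ids.join(","), // the API accepts up to 50 comma-separated ids per call
  };
}

// Guarded require so the sketch still loads without the package.
let google;
try { ({ google } = require("googleapis")); } catch (e) { /* not installed */ }

// Fetch metadata for up to 50 videos in one call; returns a promise.
async function fetchVideos(apiKey, ids) {
  const youtube = google.youtube({ version: "v3", auth: apiKey });
  const res = await youtube.videos.list(videosListParams(ids));
  return res.data.items;
}
```

Usage would be something like fetchVideos("YOUR_API_KEY", ["ZYpxsJHVC-0", "..."]).then(console.log), chunked at 50 IDs per call as with the raw URL approach.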
I'm using MEANJS to do a node app.
Basically I have JSON stored in Mongo that I am using json-csv (an npm module) to export to CSV. I was able to get it to download locally (via a button) by doing a couple of tricks. But when I deployed it to Azure, it broke. I rolled back everything, and now I don't have the code to post here... but it didn't really work anyway, since I need it to run in Azure.
If anyone had some guidance or pointers I would really appreciate it.
You can store your CSV in Azure Blob storage and give users a link to the blob.
BTW, you may still need authentication, so a proper solution is to give users a link belonging to your site; when they click the link, validate their auth and then redirect to the Azure blob link.