Google Maps API vs Library - JavaScript

What is the difference between the Google Places API and the Google Places JavaScript Library?
The documentation for the library doesn't mention API keys, and while I was developing locally everything seemed to work just fine without one. But shortly after deploying the web app, it started throwing the following errors:
GET http://maps.googleapis.com/maps/api/js/AuthenticationService.Authenticate?1shttp%3A%2F%2Fopensaq.dev%2F&5e1&callback=_xdc_._obke7w&token=16494 403 (Forbidden)
GET https://maps.googleapis.com/maps/api/js/PlaceService.FindPlaces?1m6&1m2&1d4…SAQ&6sliquor_store&8e1&18m2&1b1&10u43536&callback=_xdc_._266p5k&token=1150 403 (Forbidden)
GET http://maps.googleapis.com/maps/api/js/QuotaService.RecordEvent?1shttp%3A%2F%2Fopensaq.dev%2F&4e1&5e0&6u1&7s2v1zkm&callback=_xdc_._dacmzn&token=54694 403 (Forbidden)
Did I exceed a quota or something? What is each option intended for?

The library is a JavaScript wrapper that makes working with the API from JavaScript easier: it provides a set of objects that replace direct API calls. Under the hood it still calls the API, so you have to conform to the API's usage limits. The library may also download related JS libraries that are needed to perform the operation.
These rejections could in theory be those related JS libraries failing to download, but they appear to be API rejections. That could be due to a usage limit, or to a firewall or proxy restriction if you have one in place. Some APIs also require a request signature to be generated, and if that is missing, the request will be forbidden too. Check the documentation for the cases in which it returns 403; the API documentation usually spells them out.

It looks like I overlooked this: https://developers.google.com/maps/documentation/javascript/tutorial#api_key
It appears that I'm missing an API key.
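For anyone hitting the same 403s: the Maps JavaScript API expects the key in the script URL. A minimal loader tag, where YOUR_API_KEY is a placeholder for your own key and libraries=places pulls in the Places library:

```html
<script async defer
  src="https://maps.googleapis.com/maps/api/js?key=YOUR_API_KEY&libraries=places&callback=initMap">
</script>
```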

Related

MSAL and passport-azure-ad cannot verify token

I have not been able to link the Azure Active Directory access control in our ReactJS application to our API, and reading through the documentation on this matter has left me with more questions than answers.
I first began by following the instructions in the AAD Single-page app scenario, which was straightforward and allowed me to log in to my AAD account on the web, which I mentioned previously. That worked well.
However, when it came to configuring our NodeJS API, I was presented with two options: A Web app that calls web APIs, and Protected web API.
In A Web app that calls web APIs, the registration was straightforward and led me to believe that this was the way to go, however the Code Configuration page states that only ASP, Java, and Python are supported. I was unable to find any JavaScript examples for this scenario, so I moved on to Protected Web API.
Similarly, while I found the registration portion to be easy to follow, the Code Configuration page only listed examples in .NET, NodeJS (but only for App Functions, rather than a standalone API), and Python. Given that the NodeJS example was close enough to a standalone API, I followed along with that code, substituting our configuration options where appropriate, seeing as it used the passport-azure-ad package that I saw elsewhere and previously had tried to implement. However, in both cases, every attempt to make an API call against the protected endpoint resulted in the API logging the following:
“authentication failed due to: In Strategy.prototype.jwtVerify: cannot verify token”
Additionally, and I’m not sure how related this is, but I noticed that when I decoded the ID Token and Access Token in the ReactJS application, the ID Token version was 2.0 while the Access Token was 1.0. Searching through GitHub Issues and StackOverflow showed that others had observed this behavior as well, although I was unable to replicate their processes to get a v2.0 Access Token, which I suspect, but am not sure, is the reason the NodeJS API cannot verify the token.
Oh, and I have observed the same behavior when using MSAL.js 1.3 as well as the 2.0 beta in the client, in case that helps.
The comments discussion helped me discover a solution that was otherwise not made obviously clear in the MSAL examples and documentation, which was that any MS Graph scopes in the Login Request (ex: "User.Read") would downgrade Access Tokens from v2.0 to v1.0, which is why I was receiving v2.0 ID Tokens with v1.0 Access Tokens.
I removed all Graph scopes from the Login Request, with the API scope being the only one remaining, and as a result the following login returned a v2.0 Access Token, which was subsequently validated by the API and enabled authenticated access.

Front-End API Token Exposure

My question is simple and general: when making calls to RESTful APIs, whether they be mine or external ones, is it common practice/OK to have the token exposed on the front end? For instance, in the documentation for the Google Maps API, they suggest the following code:
<script src="https://maps.googleapis.com/maps/api/js?key=YOUR_API_KEY&callback=initMap"
async defer></script>
So is it OK that your API key is exposed on the front end for all to see? I guess Google has the option to restrict access, which can solve that, but what about other services that don't offer that option?
Is it better to keep my API calls on the back end to protect my tokens? I would think having them on the back end would not be preferred, because then I could not fetch the data asynchronously from the client.
In short, yes, it's possible to use someone else's API keys on your site if you can spoof the HTTP referer header on the client system. In fact, you could even saturate their quota completely if you have either of these:
Full control over your clients' browsers (to manipulate the HTTP headers they send to Google's servers)
A huge collection of unique IP addresses (only worth trying if you're freakishly rich and emptying someone's quota is the only motive you have left in life xD)
I don't know more at the moment; I'll update this if I learn something or someone comments.

Can I call the google maps directions API from a web page (as opposed to a server)?

I am a long-time programmer (C, Python, FORTRAN), but this is my first foray into JavaScript and anything web, so I am learning on the fly.
So, the main question up front: Can I use the google maps directions API from a script section of a simple web page on my laptop, or does it need to be called from a server?
I have an API key and I have successfully used parts of the API that are called as functions (Map, Geometry). I am trying to use the google maps directions API, which as I understand it, you must use via a URL and an HTTP GET. Here is a sample URL that my code has constructed:
https://maps.googleapis.com/maps/api/directions/json?origin=45.0491174%2C-93.46037330000001&destination=45.48282401917292%2C-93.46037330000001&key="my key"
If I paste that URL into the address bar, it works. I get a document back with the directions info. If I execute it from inside a script section on a simple web page I am building, the response I get is:
No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'null' is therefore not allowed access.
I did some searching, both in stackoverflow and elsewhere on the web and I came across this:
http://www.html5rocks.com/en/tutorials/cors/
Per that page, I checked to make sure that withCredentials was supported and then set withCredentials to true. This did not alter the outcome. Obviously the API works, so I am now wondering if I have to do this from a web server rather than from a simple web page to get around the cross-domain limitations. I am hoping to avoid setting up a server since this is a one-off for my own personal use, but maybe I don't have a choice?
As an aside, does anyone have any insight into why the directions API is called via a URL rather than as a javascript function like many of the others?
For JavaScript, use the Web -> Maps JavaScript API instead. This helped me solve the issue without a server.
The problem is that the Web Services -> Directions API, unlike e.g. the Web Services -> Geolocation API, does not allow cross-origin requests from browser JavaScript: it provides neither a server-side access-control-allow-origin: * response header nor JSONP functionality. Maybe this is even a bug on Google's side, because it seems very strange to me that one "Web Services" API server allows cross-origin access and another does not.
See https://stackoverflow.com/a/26198407/3069376
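To illustrate that suggestion, here is an untested sketch of requesting the same route through the Maps JavaScript API's DirectionsService instead of the web-service URL. The coordinates are taken from the question and YOUR_API_KEY is a placeholder:

```html
<script>
function initMap() {
  const directionsService = new google.maps.DirectionsService();
  directionsService.route(
    {
      origin: { lat: 45.0491174, lng: -93.4603733 },
      destination: { lat: 45.48282401917292, lng: -93.4603733 },
      travelMode: google.maps.TravelMode.DRIVING,
    },
    (result, status) => {
      // No CORS error here, because the library performs the
      // request itself rather than your page issuing a raw GET.
      if (status === "OK") {
        console.log(result.routes[0].legs[0].distance.text);
      }
    }
  );
}
</script>
<script async defer
  src="https://maps.googleapis.com/maps/api/js?key=YOUR_API_KEY&callback=initMap"></script>
```

This also answers the aside: the Directions *web service* is a plain HTTP endpoint meant for servers, while the JavaScript library wraps the same functionality behind function calls for use in the browser.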
To answer the main question, Yes. You can definitely use the GoogleMap Directions API inside your web page. To get you started quick and easy, follow this link . Then,
Click on the JAVASCRIPT + HTML version, copy the whole code and paste it into a text editor and save it as an html file.
Start your own local server (like node.js). Don't forget to obtain a Browser API key and set your HTTP referrer (example http://localhost:4567) in the Google Developer Console or you will get errors.
Run your html file on your local server (example http://localhost:4567/myprojfolder/samplewebfile.html) .
You can do this with all Google Maps JavaScript API samples. If you're curious about setting a node.js server, there are plenty of resources online.

Upload from Client Browser to Google Cloud Storage Using JavaScript

I am using Google Cloud Storage. To upload to cloud storage I have looked at different methods. The method I find most common is that the file is sent to the server, and from there it is sent to Google Cloud storage.
I want to move the file directly from the user's web browser to Google Cloud Storage. I can't find any tutorials related to this. I have read through the Google API Client SDK for JavaScript.
Going through the Google API reference, it states that files can be transferred using a HTTP request. But I am confused about how to do it using the API client library for JavaScript.
People here usually ask to see some code, but I haven't written any; I have failed to find a method that does the job.
EDIT 1: Untested Sample Code
So I got really interested in this, and had a few minutes to throw some code together. I decided to build a tiny Express server to get the access token, but still do the upload from the client. I used fetch to do the upload instead of the client library.
I don't have a Google cloud account, and thus have not tested this, so I can't confirm that it works, but I can't see why it shouldn't. Code is on my GitHub here.
Please read through it and make the necessary changes before attempting to run it. Most notably, you need to specify the location of the private key file, as well as ensure that it's there, and you need to set the bucket name in index.html.
End of edit 1
Disclaimer: I've only ever used the Node.js Google client library for sending emails, but I think I have a basic grasp of Google's APIs.
In order to use any Google service, we need access tokens to verify our identity; however, since we are looking to allow any user to upload to our own Cloud Storage bucket, we do not need to go through the standard OAuth process.
Google provides what they call a service account, which is an account that we use to identify instances of our own apps accessing our own resources. Whereas in a standard OAuth process we'd need to identify our app to the service, have the user consent to using our app (and thus grant us permission), get an access token for that specific user, and then make requests to the service; with a service account, we can skip the user consent process, since we are, in a sense, our own user. Using a service account enables us to simply use our credentials generated from the Google API console to generate a JWT (JSON web token), which we then use to get an access token, which we use to make requests to the cloud storage service. See here for Google's guide on this process.
In the past, I've used packages like this one to generate JWT's, but I couldn't find any client libraries for encoding JWT's; mostly because they are generated almost exclusively on servers. However, I found this tutorial, which, at a cursory glance, seems sufficient enough to write our own encoding algorithm.
I'd like to point out here that opening an app to allow the public free access to your Google resources may prove detrimental to you or your organization in the future, as I'm sure you've considered. This is a major security risk, which is why all the tutorials you've seen so far have implemented two consecutive uploads.
If it were me, I would at least do the first part of the authentication process on my server: when the user is ready to upload, I would send a request to my server to generate the access token for Google services using my service account's credentials, and then I would send each user a new access token that my server generated. This way, I have an added layer of security between the outside world and my Google account, as the burden of the authentication lies with my server, and only the uploading gets done by the client.
Anyway, once we have the access token, we can utilize the CORS feature that Google provides to upload files to our bucket. This feature allows us to use standard XHR 2 requests to use Google's services, and is essentially designed to be used in place of the JavaScript client library. I would prefer the CORS feature over the client library only because I think it's a little more straightforward and slightly more flexible in its implementation. (I haven't tested this, but I think fetch would work here just as well as XHR 2.)
From here, we'd need to get the file from the user, as well as any information we want from them regarding the file (read: file name), and then make a POST request to https://www.googleapis.com/upload/storage/v1/b/<BUCKET_NAME_HERE>/o (replacing with the name of your bucket, of course) with the access token added to the URL as per the Making authenticated requests section of the CORS feature page and whatever other parameters in the body/query string that you wish to include, as per the Cloud Storage API documentation on inserting an object. An API listing for the Cloud Storage service can be found here for reference.
As I've never done this before, and I don't have the ability to test this out, I don't have any sample code to include with my answer, but I hope that my post is clear enough that putting together the code should be relatively straightforward from here.
Just to set the record straight, I've always found OAuth to be pretty confusing, and have generally shied away from playing with it due to my fear of its unknowns. However, I think I've finally mastered it, especially after this post, so I can't wait to get a free hour to play around with it.
Please let me know if anything I said is not clear or coherent.

Server-side flow for Google Drive API authorization of a javascript Chrome extension

I was reading @Nivco's answer to Authorization of Google Drive using JavaScript and saw:
"...all you have to do it is use server-side code to process the authorization code returned after the Drive server-side flow (you need to exchange it for an access token and a refresh token). That way, only on the first flow will the user be prompted for authorization. After the first time you exchange the authorization code, the auth page will be bypassed automatically.
Server side samples to do this is available in our documentation."
Having read the documentation I am still pretty confused about how to process the authorization code and ultimately pass the access and refresh tokens to my Chrome extension so that it can proceed without the server for future requests. Can someone provide an example of the server-side code to do this?
As background I have a Chrome Extension with several thousand users that is built on the Google DocList API but I am trying to transition to the Drive API since the other one is being deprecated. Ideally my code would be entirely stand alone as an extension but I'm willing to accept the single authorization request through my server that Nivco's answer requires.
Thanks!
We've just ported our JavaScript application from using server to client flow. We've removed the server part entirely, it's not needed any longer.
You can see the source code that we used online, it's available uncompressed.
