LocalStorage not showing with build file - javascript

I have implemented a small localStorage setup with React, where I save URI endpoints once the user enters them, and I read them back in my componentDidMount function if they exist.
The setup seemed super simple and it worked fine while I was running npm start on my dev files. However, after building my project and hosting it locally using 'serve', I can't see my localStorage data anymore. Does this have something to do with the build files or the way I'm serving them?
componentDidMount() {
  // Read the saved endpoints back, if the user has entered them before.
  const userUri = localStorage.getItem('userUri');
  const tracesUri = localStorage.getItem('tracesUri');
  if (userUri && tracesUri) {
    this.setState({
      userUri: userUri,
      tracesUri: tracesUri
    });
  }
}

closeModal = () => {
  this.setState({
    showSettings: false
  });
  // Persist the endpoints so they survive a reload.
  localStorage.setItem('userUri', this.state.userUri);
  localStorage.setItem('tracesUri', this.state.tracesUri);
};

If I understand your question correctly: when you serve your built app locally, you are not able to see the data that was persisted when you ran the app via npm start?
Something to keep in mind is that data stored via localStorage is restricted to the current origin (see the MDN docs).
You need to ensure that you are testing/running locally on the same origin for the same persisted data to be visible in both cases.
You can add this code to your app:
console.log('Origin is:', window.location.origin);
This will print the origin to the console. Cross-check it by running the app both via npm start and via your local hosting setup, to verify whether the origin is the same or different.

Access to data stored in the browser, such as localStorage and IndexedDB, is separated by origin. Each origin gets its own separate storage, and JavaScript in one origin cannot read from or write to the storage belonging to another origin. [ref: link]
So I'd guess that while you serve the build you are using a different port, which is why you are not able to access the previous localStorage values.
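For example, a quick way to confirm this is to dump the current origin's entire localStorage bucket in both environments and compare. The ports in the comments are just common Create React App / serve defaults, not something taken from your setup:
// Log the current origin and everything stored under it.
// Under `npm start` the origin is typically http://localhost:3000,
// under `serve -s build` typically http://localhost:5000 (or another port).
// If the ports differ, the origins differ, and so do the storage buckets.
console.log('origin:', window.location.origin);
for (let i = 0; i < localStorage.length; i++) {
  const key = localStorage.key(i);
  console.log(key, '=', localStorage.getItem(key));
}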


How to prevent firebase db access from chrome console or other methods

So I have a single-page, frontend-only app. Right now I have something like this:
// db.js
import firebase from "firebase/app";
import "firebase/firestore";

var firebaseConfig = {
  ...
};

export const db = firebase
  .initializeApp(firebaseConfig)
  .firestore();
In main.js I was experimenting with putting the db instance in the global window scope, just to see if I could go to the Chrome web console and access it to submit a doc, and indeed I can:
// main.js
import { db } from './db'
window.db = db;
and then, from the Chrome console:
db.collection("test").add({'somekey': 'Can I add this doc?'})
How do I prevent someone from doing this without having a real backend to check auth? I like the reactivity of Vue + Firebase. If I don't expose the db variable to the global scope, is that enough? I was reading this post:
https://forum.vuejs.org/t/how-to-access-vue-from-chrome-console/3606/2
because any variable you create inside your main.js file will still not be globally available, due to how webpack wraps modules in their own scope.
One of the great things about Firestore is that you can access it directly from within your web page. That means that within that web page, you must have all the configuration data needed to find the relevant Google servers, and to find your Firebase project on those servers. In your example, that data is part of firebaseConfig.
Since your app needs this configuration, any malicious user can also get it from your app. There is no way to hide this: if your app needs it, a sufficiently motivated malicious user will be able to find it. And once someone has the configuration, they can use it to access your database.
The way to control access to the database is by using Firebase's server-side security rules. Since these are enforced on the server, there is no way to bypass them, neither by your code, nor by code that a malicious user writes.
You can use these security rules to ensure that all data is valid, for example by making sure that all the required fields are there, and that there's no data your app doesn't use.
But the common approach is to also ensure that all data access is authorized. This requires that your users are authenticated with Firebase Authentication. You can either require your users to sign in with their credentials, or you can sign them in anonymously. In the latter case they don't need to enter any credentials, but you can still ensure, for example, that each user can only write data to their own area of the database, and that they can only read their own data.
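As an illustration of that last point, here is a minimal sketch using the same v8 namespaced API as the question. The users/{uid} layout and the rule shown in the comment are assumptions about how you might structure the data, not something from the question:
import firebase from "firebase/app";
import "firebase/auth";
import { db } from "./db";

// A matching server-side security rule (hypothetical layout) would be:
//   match /users/{uid}/{document=**} {
//     allow read, write: if request.auth != null && request.auth.uid == uid;
//   }
firebase.auth().signInAnonymously().then(({ user }) => {
  // Even someone driving the console can now only write under their own
  // uid; writes anywhere else are rejected by the server.
  return db.collection("users").doc(user.uid)
    .collection("notes")
    .add({ somekey: "Only this user can write here" });
});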

Force Service Worker to only return cached responses when specific page is open

I have built a portal which provides access to several features, including trouble ticket functionality.
The client has asked me to make trouble ticket functionality available offline. They want to be able to "check out" specific existing tickets while online, which are then accessible (view/edit) while the user's device is out-of-range of any internet connection. Also, they want the ability to create new tickets while offline. Then, when the connection is available, they will check in the changed/newly created tickets.
I have been tinkering with Service Workers and reviewing some good documentation on them, and I feel I have a basic understanding of how to cache the data.
However, since I only want to make the Ticketing portion of the portal available offline, I don't want the service worker caching or returning cached data when any other page of the portal is being accessed. All pages are in the same directory, so the service worker, once loaded, would by default intercept all requests from all pages in the portal.
How can I set up the service worker to only respond with cached data when the Tickets page is open?
Do I have to manually check the window.location value when fetch events occur? For example,
if (window.location == 'https://www.myurl.com/tickets') {
  // Try to get the request from the network. If successful, cache the result.
  // If not successful, try returning the request from the cache.
} else {
  // Only try the network, and don't cache the result.
}
There are many supporting files that need to be loaded for the page (e.g. CSS files, JS files, etc.), so it's not enough to simply check the request.url for the page name. Will window.location even be accessible in the service worker's fetch event, and is this a reasonable way to accomplish this?
Use service worker scoping
I know you mentioned that you currently have all pages served from the same directory... but if you have any flexibility over your web app's URL structure at all, then the cleanest approach would be to serve your ticket functionality from URLs that begin with a unique path prefix (like /tickets/), and then host your service worker at /tickets/service-worker.js. The effort to reorganize your URLs may be worthwhile if it means you can take advantage of the default service worker scoping, and not have to worry about pages outside of /tickets/ being controlled by a service worker.
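For instance (the paths here are illustrative), registering the worker from under /tickets/ gives it that scope by default:
// The service worker script's own URL determines its default scope:
// a worker served from /tickets/service-worker.js can only control
// pages whose URLs start with /tickets/.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/tickets/service-worker.js')
    .then(reg => console.log('controlling scope:', reg.scope))
    .catch(err => console.error('registration failed:', err));
}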
Infer the referrer
There's information in this answer about determining the referring window client's URL from within your service worker's fetch handler. You can combine that with an initial check in the fetch handler to see whether the request is a navigation, and use that to exit early.
const TICKETS = '/tickets';

self.addEventListener('fetch', event => {
  const requestUrl = new URL(event.request.url);
  const isNavigation = event.request.mode === 'navigate';
  // Ignore navigations to any page other than /tickets.
  if (isNavigation && requestUrl.pathname !== TICKETS) {
    return;
  }
  // Ignore subresource requests unless they come from /tickets.
  if (!isNavigation) {
    const referrerUrl = ...; // See https://stackoverflow.com/questions/50045641
    if (referrerUrl.pathname !== TICKETS) {
      return;
    }
  }
  // At this point, you know that it's either a navigation for /tickets,
  // or a request for a subresource from /tickets.
});

How to use the same DB with 2 different React applications

I have to make an offline application that syncs with another online app when it can.
I developed the offline app using PouchDB, starting from the create-react-app GitHub repo. This app runs at localhost:3000. In it, I create and manage a little DB named "patientDB".
I manage the db with the classic put method, as shown in the documentation:
var db = new PouchDB('patientDB');

db.put({
  _id: 'dave@gmail.com',
  name: 'David',
  age: 69
});
With the PouchDB development tool for Chrome, I can see that the DB is working like I want (the documents are created).
The other application is another React application with a Node server. During development, this app runs at localhost:8080.
In this app I try to fetch all the docs contained in the "patientDB" with the following code:
const db = new PouchDB('patientDB', { skip_setup: true });

db.info()
  .then(() => {
    console.log("DBFOUND");
    db.allDocs({ include_docs: true })
      .then(function (result) {
        console.log("RESULT", result);
      })
      .catch(function (err) {
        console.log("NOPE");
        console.log(err);
      });
  });
My problem is that I can't reach the "patientDB" created with the offline app from the online app. When I do var db = new PouchDB('patientDB'), it creates a new, empty db, because it can't find a db that is already present.
I use Google Chrome to run all my applications, so I thought the dbs would be shared.
However, I did some very simple tests with two html files:
First.html, which initializes a new db with a doc
Second.html, which reads the db created in First.html
In this case, I can fetch the doc created with First.html from Second.html, even though they are two separate "websites".
I think that applications running at localhost are somehow isolated from the rest, even though, as I said before, I use the same browser for all my applications.
I don't know what to do, or whether what I want is even possible. If someone has an idea for me, I would be pleased.
EDIT
I can see why my DBs are not shared.
When I look at all my local DBs after running an html file, the DBs are listed as _pouch_DB_NAME under the file:// origin.
When I check my DB from the application running locally, the DB doesn't come from file:// but from localhost:8080.
If you know how I can fetch docs from a local db in an app running on a server, that would be really helpful for me!
PouchDB uses IndexedDB in the browser, and IndexedDB adheres to a same-origin policy. MDN says this:
IndexedDB adheres to a same-origin policy. An origin is the domain, application layer protocol, and port of a URL of the document where the script is being executed. Each origin has its own associated set of databases. Every database has a name that identifies it within an origin.
So you have to replicate your local database to a central server in order to share the data. This could be a PouchDB Server running alongside your Node app. You can also access PouchDB Server directly from the browser:
var db = new PouchDB('http://localhost:5984/patientDB')
As an alternative, you can use CouchDB or IBM Cloudant (which is basically hosted CouchDB).
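A minimal sketch of that replication, assuming a PouchDB Server (or CouchDB) listening at http://localhost:5984:
// Each app keeps its own local 'patientDB' and syncs it with the shared
// remote database; this is what actually moves documents between origins.
const localDB = new PouchDB('patientDB');
const remoteDB = new PouchDB('http://localhost:5984/patientDB');

// Two-way, continuous replication: changes made offline in either app
// flow through the shared remote once a connection is available.
localDB.sync(remoteDB, { live: true, retry: true })
  .on('change', info => console.log('replicated change:', info))
  .on('error', err => console.error('sync error:', err));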

Single Page Application External Configurations (Not On NodeJS)

I'm looking for either a reference or an answer to what I think is a very common problem for people currently implementing JavaScript MVC frameworks (such as Angular, Ember or Backbone).
I am looking for a way, or a common pattern, to externalize application properties so that they are accessible in the JS realm. Something that would allow the JavaScript to load server-side properties, such as endpoints, salts, etc., that live outside the application root. The issue I keep coming across is that browsers do not typically have access to the file system, because that is a security concern.
Therefore, what is the recommended approach for loading properties that are configurable outside of a deployable artifact, if such a thing exists?
If not, what is currently in practice and considered the recommended approach for this type of problem?
I am looking for a cross-compatible answer (Google Chrome is awesome, I agree).
Data Driven Local Storage Pattern
Just came up with that!!
The idea is to load the configuration properties using a convention-over-configuration approach, where all properties are derived from the targeted hostname. That is, the hostname determines a trusted endpoint, and that endpoint serves the corresponding properties to the application. These properties carry the information that is relevant at runtime, and they are supplied to the integration points by iterating over them during the bootstrapping start up.
To keep it simple, we'll just use two properties here.
This implementation is Ember JS specific, but the general idea should be portable.
I am narrowing the scope of this question to a specific technological perspective, namely Ember JS, with the following remedy that is working properly for me; I hope it will help anyone out there dealing with the same issue.
Ember.Application.initializer implementation at start up:
initialize: function (container, application) {
  var origin = window.location.origin;
  var host = window.location.hostname;
  var port = window.location.port;
  var configurationEndPoint = '';
  // local mode
  if (host === 'localhost') {
    if (port === '8000') {
      // standalone, using an API stub on NODEJS
      configurationEndPoint = '/api/local';
    } else {
      // standalone UI app integrating with a back end application
      // on the same machine, different port
      configurationEndPoint = '/services/env';
    }
    origin += configurationEndPoint;
  } else {
    throw Error('Unsupported Environment!!');
  }
  // Load the configuration from a trusted resource and store it
  // in localStorage on start up.
  $.get(origin, function (data) {
    // Store all configuration key/value pairs in localStorage for access.
    var configuration = data.configuration;
    for (var config in configuration) {
      localStorage.setItem(config, configuration[config]);
    }
  });
}
Configurable Adapter
export default DS.RESTAdapter.extend({
  host: localStorage.host,
  namespace: localStorage.namespace
});
No later than yesterday morning, I was tackling the same issue.
Basically, you have two options:
Use localStorage/IndexedDB or any other client-side persistent storage. (But you have to put the config there somehow.)
Render your main template (the one that always gets rendered) with a hidden element in which you put the config JSON (see the sketch below).
Then in your app init code you get this config and use it. Plain and simple in theory, but let's get down to nasty practice (for the second option).
First, the client should get the config before the application loads. That is not always easy: e.g. the user may need to be logged in to see the config. In my case, I check whether I can provide the config on the first request, and if not, I redirect the user to the login page. This leads us to the second limitation: once you are ready to provide the config, you have to reboot the app completely so that the configuration code runs again (at least in Angular this is necessary, as you cannot access providers after the app bootstraps).
Another constraint: the second option is useless if you use static HTML and cannot change it somehow on the server before sending it to the client.
Maybe a better option would be to combine both variants. This should solve some problems for returning users, but the first interaction will not be very pleasant anyway. I have not tried this yet.
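For reference, a minimal sketch of the second option; the element id and the config key are made up for illustration:
// The server renders something like this into the main template:
//   <script type="application/json" id="app-config">
//     {"apiEndpoint": "https://api.example.com/v1"}
//   </script>
// The app parses it during bootstrap, before anything else runs:
const configEl = document.getElementById('app-config');
const config = JSON.parse(configEl.textContent);
console.log('endpoint:', config.apiEndpoint);
Using type="application/json" keeps the browser from executing the block, so it sits in the page as inert data until the app reads it.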

Google OAuth WildCard Domains

I am using Google auth but keep getting an origin mismatch. The project I am working on has subdomains that are generated by the user. So, for example, there can be:
john.example.com
henry.example.com
larry.example.com
In my app settings I have one of my origins set to http://*.example.com, but I get an origin mismatch. Is there a way to solve this? Btw, my code looks like this:
gapi.auth.authorize({
  client_id: 'xxxxx.apps.googleusercontent.com',
  scope: ['https://www.googleapis.com/auth/plus.me',
          'https://www.googleapis.com/auth/userinfo.email',
          'https://www.googleapis.com/auth/userinfo.profile'],
  state: 'http://henry.example.com',
  immediate: false
}, function (result) {
  if (result != null) {
    gapi.client.load('oauth2', 'v2', function () {
      console.log(gapi.client);
      gapi.client.oauth2.userinfo.get().execute(function (resp) {
        console.log(resp);
      });
    });
  }
});
Hooray for useful yet unnecessary workarounds (thanks for complicating yourself into a corner, Google)...
I was using Google Drive via the JavaScript API to open the file picker, retrieve the file info/url, and then download the file with curl on my server. Once I finally realized that all my wildcard domains would have to be registered, I about had a stroke.
What I do now is the following (this is my use case; cater it to yours as you need to):
1. On the page that you are on, create an onclick event to open a new window on one specific domain (https://googledrive.example.com/oauth/index.php?unique_token={some unique token}).
2. In the new popup I did all my Google Drive authentication, had a button to click which opened the file picker, then retrieved at least the metadata I needed from the file. Then I stored the token (primary key), access_token, downloadurl and filename in my database (MySQL).
3. Back on step one's page, I created a setTimeout() loop that runs an AJAX call every second with that same unique_token, checking whether it has been entered in the database (sketched below). Once it finds it, I kill the loop, retrieve the contents, and do with them as I will (in this case, upload them through a separate upload script that uses curl to fetch the file).
This is obviously not the best method for handling this, but it's better than entering each and every subdomain into Google's Cloud Console. I bet you could probably do this with Google's server-side OAuth libraries, but my use case was a little complicated, and I was cranky because of the past 4 days I'd spent on a silly little integration with Google.
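A rough sketch of the polling in step 3. The endpoint name, response shape, and handleFile are all hypothetical, and the answer's setTimeout() loop is expressed here with the equivalent setInterval:
// Ask the server once a second whether the popup has stored the file
// info for this unique_token yet; stop as soon as it appears.
function pollForFile(uniqueToken) {
  const timer = setInterval(() => {
    fetch('/oauth/check.php?unique_token=' + encodeURIComponent(uniqueToken))
      .then(res => res.json())
      .then(data => {
        if (data.found) {
          clearInterval(timer);   // kill the loop
          handleFile(data);       // hypothetical: hand off to the upload script
        }
      })
      .catch(err => console.error('poll failed:', err));
  }, 1000);
}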
Wildcard origins are not supported, and the same goes for redirect URIs.
The fact that you can register a wildcard origin is a bug.
You can use the state parameter, but be very careful with it: make sure you don't create an open redirector (an endpoint that can redirect to any arbitrary URL).
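For example, a minimal sketch of validating the state value before redirecting, so that only your own subdomains are ever targets (the domain is taken from the question; the helper itself is hypothetical):
// Return a safe redirect target derived from `state`, or null.
// Anything that is not an example.com URL is rejected, which is what
// keeps the endpoint from becoming an open redirector.
function safeRedirectTarget(state) {
  let url;
  try {
    url = new URL(state);
  } catch (e) {
    return null; // state was not a valid URL at all
  }
  const host = url.hostname;
  const ok = host === 'example.com' || host.endsWith('.example.com');
  return ok ? url.href : null;
}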
