Situation:
I have a sensitive website about domestic violence with an EXIT button that links directly to Google, so that anyone visiting the website can quickly jump to Google if they feel unsafe or uncomfortable.
I would love to be able to clear any references to this website from both the history list and the back-button functionality. Basically, remove any proof of visiting the website. Keep in mind that not all people know how to browse anonymously, and some people cannot even get out of the house to browse the internet elsewhere. Yes, this scenario is for seriously bad situations.
I've tried using location.replace instead of regular links to keep pages from being saved into the history, but they keep being saved in the history anyway.
I've also tried browser.history.deleteUrl({url:"https://thewebsite"}), but this throws an error because browser is undefined.
Is this even possible from a website? Or are there other options?
Thanks for thinking with me!
As you state in the question, you can use window.location.replace() to prevent your site from appearing in the window’s history (back button). Of course, this only works if your site had only one entry in the window’s history to begin with.
As you also state, there is a bigger problem: this does not prevent the site from appearing in the browser’s history. I believe you cannot solve this problem with scripts on your website: you need some external solution, like a browser extension.
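To make the limits concrete, here is a minimal sketch of such an exit button. The element id "exit-button" and the helper name quickExit are illustrative, not from your site:

```javascript
// Sketch of a quick-exit button (assumes a hypothetical element with
// id "exit-button"). location.replace() swaps the *current* history entry
// instead of pushing a new one, so the Back button won't return here; but
// earlier entries for the site, and the browser's global history list,
// are untouched.
function quickExit(target) {
  if (typeof window !== "undefined" && window.location) {
    window.location.replace(target);
  }
  return target; // returned so the helper can be exercised outside a browser
}

if (typeof document !== "undefined") {
  document.getElementById("exit-button").addEventListener("click", function () {
    quickExit("https://www.google.com/");
  });
}
```

As for the browser.history.deleteUrl error you saw: that is the WebExtensions API, which only exists inside a browser extension that declares the "history" permission. It is not available to ordinary page scripts, which is why browser is undefined on your site.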
(This does not really answer your question, but you could try using URLs and titles that disguise the nature of your site. I have heard of that being done with this sort of resource.)
In response to my idea of disguises, someone asked for examples and asked about discoverability. I was referring to the Aspire News App, featured on Dr Phil’s TV show. On that show, they made a big deal out of not showing what the app looked like, to avoid tipping off abusers. They also said the app is disguised as an ordinary app.
When I was researching this answer, I learned that disguises are indeed a terrible idea. I had no trouble finding information about the app online, and one review said the app is “pointless” because “with all of the media coverage this app has gotten abusers know exactly what it is and what to look for”.
I also learned that the app still had a fundamental security flaw 7 years after it was released. This shows that even supposedly reputable apps, dealing with sensitive matters, cannot be trusted. And perhaps it means that supposedly reputable websites looking to hide themselves from the browser’s history cannot be trusted either.
Related
A newbie friend of mine thought of attempting to "secure" a website and protect its assets through something like this. I explained why it's not going to work, but it got me thinking whether there are actual legitimate reasons to disable right-clicking in the browser.
Off the top of my head:
Security is obviously not one, as any determined "attacker" can simply disable or override the JavaScript, inject their own JavaScript, or use Chrome's Developer Tools, since they control the client. At best, it'll stop non-technical users.
Attempts at protecting assets such as images obviously won't work either: users could simply save the page using their browser, complete with the images, use View Source and grab the assets from there, or simply take a screenshot, among many other things. It'll only really stop the most non-technical and lazy users.
Preventing accidents, such as running a transaction again via right click -> back, would simply point to a deeper issue with the site code, so it would be a band-aid solution at best. I suppose one could make an argument that this is one use until the underlying site code is improved.
For some more desktop-app-like pages, I've sometimes used right click for actions specific to the application. For example, a custom context menu or modified drag/drop action. This provides what mouse users expect in a convenient way. (Touch users still need an alternative, however!)
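A rough sketch of what that looks like in practice. The element id "canvas" and the action list are illustrative, not from any real app:

```javascript
// Sketch of a custom context menu for an app-like page. Right-click is
// suppressed only inside the app area, and replaced with app-specific actions.
const MENU_ACTIONS = [
  { label: "Rotate", id: "rotate" },
  { label: "Delete", id: "delete" },
];

// Look up a menu action by id; returns null if it doesn't exist.
function findAction(id) {
  return MENU_ACTIONS.find(function (a) { return a.id === id; }) || null;
}

if (typeof document !== "undefined") {
  document.getElementById("canvas").addEventListener("contextmenu", function (e) {
    e.preventDefault(); // suppress the browser menu only inside the app area
    // ...position and show your own menu element built from MENU_ACTIONS here...
  });
}
```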
This question already has an answer here:
Security modifing app state from javascript
(1 answer)
Closed 4 years ago.
I am a self-taught web developer and I've learned a lot throughout the years from more experienced developers, but there is one thing that is always bugging me...
The idea that any user can see and edit anything created using "front-end technologies", i.e. HTML, CSS, JavaScript.
I feel I am too paranoid about this, but wanted to hear from people that are more experienced and skilled than I am.
Obviously, I know I should secure the website so that every imaginable action from the user is guarded against, but I still can't help but wonder: is it enough?
I understand this is a general question and it's hard to answer, but consider the next situation.
I am building a website with a substantial number of modals, or pop-ups.
An example would be a log in modal.
When a user clicks on the "log in" button, I display the log in modal and hide it once it is closed.
Now consider several of these modals being hidden from the user; they still appear as hidden elements when the website is inspected.
A user could then display the modals by editing the CSS, which would cause issues if these modals are displayed where they shouldn't be.
This is a crude example, but is this considered "bad practice/code structure"?
I am just very confused if this is completely insignificant since it isn't the "normal" functionality of the website, or if this is important and I should carefully structure what is shown in the inspection window of the browser.
Hopefully someone will shed some light on this issue.
Thank you
As long as the code visible to the user doesn't contain anything that should be secret and known only to the server, it's definitely not something to bother with. There are an uncountable number of ways for anyone to break a website by opening the console or developer tools and by deleting/moving elements or typing in Javascript of their own. Making a website impenetrable to this sort of tweaking would be impossible.
If the website breaks as a result of a user's own tampering, that's on them, not the site designer (and, worst case, they can just refresh the page to get back to the working-as-designed page). As long as nothing meant to be secret is sent to the client, feel free to build pages with the assumption that the end user will not run any custom Javascript or make any changes on their own. They may do so, of course, but as long as it doesn't allow them to do things that cause problems for other users or the server (like accepting unverified input on the backend, or sending something that should be secret to the client), it shouldn't be something to worry about.
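To make "nothing meant to be secret is sent to the client" concrete, here is a minimal sketch of the kind of check that belongs on the server, untouched by whatever the user does to the page in devtools. The function validateLogin and its rules are illustrative, not a real API:

```javascript
// The modal markup is just UI; the server must re-validate everything it
// receives, regardless of what the client-side code allowed or displayed.
function validateLogin(body) {
  if (typeof body !== "object" || body === null) {
    return { ok: false, error: "bad request" };
  }
  const { username, password } = body;
  if (typeof username !== "string" || username.length === 0) {
    return { ok: false, error: "missing username" };
  }
  if (typeof password !== "string" || password.length < 8) {
    return { ok: false, error: "missing or too-short password" };
  }
  return { ok: true };
}
```

The point of the sketch: even if a user unhides your login modal somewhere odd or submits it with devtools, the server-side check is what actually decides, so the hidden markup itself is harmless.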
This is my first HTML/JavaScript question; I raised it on meta without results yet.
On Stack Overflow I reload my tags page, which displays questions related to my selected tags.
If I then want to go back to the previous results page to check a question I spotted, it isn't possible, because the back button leads to a different page (or a blank page) rather than the results I was just looking at.
Can I go back to the previous view, or is that impossible?
Sometimes questions are deleted or have their tags changed, so it's hard to find them again.
Is it possible in JavaScript/HTML to view the last results?
I've been in the same situation many times, wishing to see what I just saw but the internet has changed out from under me. (It actually happened while I was researching this answer...) Obligatory XKCD: https://xkcd.com/1309/
Let's talk a little about what this question is actually asking. The example given in this question is this very site, SO, but the purpose can apply to any website with dynamic information. As a user browses through information, a common pattern is to link into more specific details. When the user moves back to the general list, it sometimes happens that new information has been added, old information removed, or things just generally moved about. In those cases, it can be difficult for the user to navigate back to information they may have wanted to look at, or have looked at but now lost the link to. We ask what can be done to make this UX better, both in the user's habits and in the website's design.
I think this question truly does apply to both meta and non-meta, since there are many aspects: what SO specifically can do, as a website, to improve its UX (the meta question); what developers of sites in general could do to improve UX in this regard; and what a user can do to better utilize their technology or improve their workflow without special consideration from a website. Let's talk about what the user can do first.
What can a user do?
In some senses, this should be a simple task. All the browser would need to do is keep a record of the current DOM in its cache when moving to a new page, then on back press, load the cached DOM as it stood when you last looked at it. But what do browsers actually do?
Browsers won't consistently help you right now by default. The answer by Stephan P gives us some insight. Essentially, going 'back' could result in a number of different things, from loading the cached resources like I described above, to reloading some of the resources, to just refetching everything. And this is all decided by the arcane machinations of your browser. You have basically no say in what happens. (Except, of course, when you use one of the many methods to forcibly skip the cache and reload all the resources; there doesn't seem to be any method to forcibly load from cache.) So, no dice there.
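One small hedge on the "no say in what happens" point: page scripts (though not the Back button itself) can request cache-only loads through the Fetch API. A sketch, with an illustrative helper name:

```javascript
// The Fetch API's cache mode gives scripts some control over caching.
// "only-if-cached" must be paired with mode "same-origin", and the request
// fails if the response is not already in the HTTP cache.
function cachedFetchOptions() {
  return { cache: "only-if-cached", mode: "same-origin" };
}

// Browser-only usage:
// fetch("/questions?tab=newest", cachedFetchOptions()).then(r => r.text());
```

This doesn't change what Back does, but it shows the cache isn't entirely out of reach for scripted requests.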
The closest browser functionality I was able to find is the setting to load from cache when offline in Chrome. But this would require us to constantly switch between connecting and disconnecting from the internet while browsing. Which is not very helpful.
The above point actually just proves that this should be 100% possible, if we could just look at our browser's cache of the page. Which we can! GolfWolf lets us know about the handy chrome://cache address that lets us view the complete contents of our browser's cache! Unfortunately, it is all in hex, so not super useful for browsing...
There are tools that will convert the hex dump to a viewable webpage, as Senseful points us to. But I think going into your cache, finding the old page, copying the hex over to a translator, then viewing your page is not a very friendly workflow for regular browsing.
So, our browser is basically useless for what we are trying to accomplish. We only have a few options:
Write our own browser
Something very custom...
A. One potential would be to set up a proxy of sorts, that hard caches all the pages that went through it and returns from cache when you went back in your browser. Perhaps a simple browser extension button to click to send a message to the proxy to clear its cache?
B. While we're talking about crazy ideas... It could be possible to make a webpage that loads pages in iframes, intercepts link clicks, and instead of opening the new page in the same iframe opens a new iframe for it, just hiding the old iframe in its previous state, so that when you go 'back' it simply unhides your old page. But that is super kludgy and will break a lot of stuff with all those discontinuities. (And don't forget, not all links open a new page; there are a lot of ways this plan could go wrong.)
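A very rough sketch of idea B, just to show the shape of it. Assumptions: same-origin pages only (cross-origin frames won't let you intercept their clicks), and a hypothetical container element with id "frames"; pushPage and popPage are my own names:

```javascript
// Each "navigation" creates a new iframe and hides the previous one;
// "back" removes the top iframe and reveals the one underneath,
// still in its old state.
const frameStack = [];

function pushPage(url) {
  frameStack.push(url);
  if (typeof document !== "undefined") {
    const container = document.getElementById("frames");
    const prev = container.lastElementChild;
    if (prev) prev.style.display = "none"; // keep the old page alive, just hidden
    const frame = document.createElement("iframe");
    frame.src = url;
    container.appendChild(frame);
  }
  return frameStack.length; // stack depth, returned for testability
}

function popPage() {
  const url = frameStack.pop() || null;
  if (typeof document !== "undefined") {
    const container = document.getElementById("frames");
    if (container.lastElementChild) container.lastElementChild.remove();
    if (container.lastElementChild) container.lastElementChild.style.display = "";
  }
  return url; // the url we just left
}
```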
Outside of actually solving this, what could we do to mitigate or work around our problems? ...Never leave a webpage... ever...
In all seriousness, this is my workflow: open the main page, open all links in new tabs (You can usually cmd/ctrl-click to quickly open a link in a new tab), keep all the tabs open until I've finished.
What can a website do?
Several things. There are many strategies that can be used to make information traversable, and they all have their own benefits and ramifications. This is not an exhaustive list:
The site can implement custom urls with parameters for everything that could affect rendering of information. Google search does this. In this way going back to a link (or just sharing a link to a different person) will result in loading the page in the same deterministic way as before. This can be used to great success, but also can result in inordinately long links.
The site can use cookies or local storage to keep track of where you are to restore your state when you return to a previous page.
The site can create its own history for you to browse. This is similar to what SO does now in other aspects, but taken to the extreme. Basically, record every action, every comment, every view, every post, absolutely everything. Then let the user browse through this history of things (obviously with lots of sorting and filtering options) to let them sift down to what they are looking for. i.e. "What was that post I was looking at yesterday? ...searching... Oh here it is!"
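The first strategy above can be sketched in a few lines. The helper buildListUrl and its parameters are illustrative, not a real API:

```javascript
// Strategy 1: put every parameter that affects rendering into the URL,
// so that Back (or a shared link) deterministically reproduces the view.
function buildListUrl(base, state) {
  const params = new URLSearchParams(state);
  return base + "?" + params.toString();
}

// In a browser you would then keep the address bar in sync as state changes:
// history.replaceState(null, "", buildListUrl(location.pathname, { tab: "newest", page: "3" }));
```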
Each of these is a huge topic on its own, way outside the scope of an answer here on SO.
TL;DR
A user can't do much of anything that isn't either a lot of work or breaks flow in one way or another, or both.
A website can do a lot of things to ease the user's pain. But there's no silver bullet. Only lots of work to make a good UX.
You can try to play with the ids to go back using only the browser, but if the question is deleted, I think there's no way to retrieve its content unless you use web-crawler software :/
https://stackoverflow.com/questions/48630484/
https://stackoverflow.com/questions/{ID}/ (it's an auto-incrementing id)
Stack Overflow community,
I've read several posts trying to solve this problem, but they don't answer my question.
Is there any legal way to find out what events (?) another site sends?
I don't ask because of illegal business, and I am ready to find out more myself as soon as I know what I really have to look for in terms of topic and methods.
In particular, it's about advertising and finding out if someone registered on another site via a referral link, like a signal that is sent as soon as the registration (on the other site, which is not mine) is completed.
I want to find this out during the client's visit to my site.
I just need to know if such a thing is legally possible and what JS topics I should look into to find out more.
I hope my post is comprehensible enough. :)
edit: It's not about global variables.
You can use your browser's developer tools to see what's happening behind your back while you're visiting a web page (I recommend Firebug in Firefox). Alternatively, you can use a network analyzer like Wireshark to capture the traffic from the browser and analyze it any way you like.
It's all up to you to find the information you think is relevant in URLs, request headers and bodies, etc. In your case, this would include script-generated content and referral codes that may help keep track of a user's browsing history across domains.
You don't really need JavaScript knowledge to do this, but you do need some basics about networking protocols.
This is probably as illegal as using a text editor, but just ask your lawyer if you're unsure :D
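For the part you can do from your own site with JavaScript: you can only observe what actually reaches your page, such as the Referer header (via document.referrer) or a referral code the other site puts in the URL when it sends the user back. The same-origin policy prevents you from watching events inside someone else's site. A sketch; the function name and the "ref" parameter are illustrative:

```javascript
// Extract a referral code from a URL your page was reached through.
// The base URL is only needed so relative URLs parse; it is a placeholder.
function extractReferralCode(url) {
  const u = new URL(url, "https://example.invalid/");
  return u.searchParams.get("ref"); // null if no such parameter
}
```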
At my company most inventory tracking is done via an ASP.NET web application. The application is poorly conceived, poorly designed, poorly implemented, and somewhat of a hassle for a user to work with. These are things that are my opinion though and management has its own thoughts on the matter.
Such luxuries as the browser's back button and bookmarking pages are already not an option because of heaps and heaps of ancient Ajax code and now one of my bosses has the idea that he would prefer for the URL bar and browser buttons not to appear at all.
At first I told him that it was impossible, but after thinking about it, I suppose it could work if you used JavaScript to create a fullscreen pop-up and ran the application in that.
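Roughly what I had in mind is something like this (openAppWindow and the feature string are just illustrative; I gather modern browsers ignore most chrome-hiding flags anyway, so the location bar may not actually disappear):

```javascript
// Sketch: open the app in a bare popup window with minimal browser chrome.
// Modern browsers ignore most of these feature flags (menubar/toolbar/location
// can no longer be fully hidden in Chrome or Firefox), so this is best-effort.
function openAppWindow(url, width, height) {
  const features = "popup=yes,width=" + width + ",height=" + height +
    ",menubar=no,toolbar=no,location=no";
  if (typeof window !== "undefined" && window.open) {
    return window.open(url, "inventoryApp", features);
  }
  return features; // outside a browser, return the feature string for inspection
}
```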
I personally am against this idea, though. Since I'm the one who would do the work, my own subconscious motivations are suspect, so I'd like to gather some opinions on running an application in such a manner.
In addition, has anyone had any experience with transferring a regular webapp to such a setup? I'd like to know how much work could be in store for me.
Next time, for the good of the world, keep these kinds of ideas to yourself. It sounds like your boss is not qualified to make such a call, so make the call for him.
If your boss believes the url bar and browser buttons are not supposed to be there, then convert it to a stand-alone app. Don't try to cram it into a web platform if it's not supposed to be one.
You know the issues, so fight for what you think is right. Don't implement anything you are not going to be proud of.
You may find Prism interesting: full screen, no bars, just the web app.
I'd be tempted to simply add a button that allows you to pop out the app, without removing the normal mode.
If necessary, sell it with some waffle about users getting confused or not being able to reopen it or something. Or even pretend it's not possible to do without it.
That goes some way towards user friendliness, and salves your conscience anyway.