A component of an extension that I've been developing requires the extension to save files and then (at a later time) open those files again. As far as I know, this can be done as long as the downloads directory is known. Saving is easy; you just use:
chrome.downloads.download({
    url: fileData,
    filename: "/AppGeneratedSubFolder/" + fileName,
    conflictAction: 'overwrite',
    saveAs: false
});
And I can perform the opening process as long as I have included the correct permission in the manifest and the user has checked the "Allow access to file URLs" box.
I can then access the file by using the following:
var missing = "userInputRightNow";
var open = "file:///" + missing + "/AppGeneratedSubFolder/" + fileName;
chrome.windows.create({
    url: open,
    type: 'popup',
    state: 'maximized',
    focused: true
});
However, this requires me to somehow get the content for the variable labeled missing. I have done this by directing users to the following page (i.e., opening it in a new window) and telling them to copy the directory into an input box within my application; I then save the value for future use by the application:
chrome://settings/?search=Ask%20where%20to%20save%20each%20file%20before%20downloading
However, I'm wondering if there is a better approach. I broadly understand the design decision not to expose a user's file system to extension APIs, but I'm wondering if I'm missing some workaround:
Is there another permission that might enable this that I haven't considered?
I don't actually have any reason to "know" the missing information. I would be satisfied with a way to reference the relative path without knowing it.
Or is there some superior method of obtaining that missing information? Walking users through the process of copying and pasting it (no matter how clear the instructions) seems to be a major bottleneck in my installation process.
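For what it's worth, here is a sketch of one direction that might avoid asking the user at all, using only the downloads permission already required above: chrome.downloads.download passes a downloadId to its callback, and chrome.downloads.search reports the absolute local path of that item in DownloadItem.filename, so the full path could be recovered without the user pasting anything.

chrome.downloads.download({
    url: fileData,
    filename: "AppGeneratedSubFolder/" + fileName,
    conflictAction: 'overwrite',
    saveAs: false
}, function (downloadId) {
    // Later (immediately here, for brevity), look the item up by its ID.
    chrome.downloads.search({ id: downloadId }, function (items) {
        if (items.length > 0 && items[0].filename) {
            // items[0].filename is the absolute local path on disk; on
            // Windows it contains backslashes, which Chrome accepts in
            // file:// URLs.
            chrome.windows.create({
                url: "file:///" + items[0].filename,
                type: 'popup',
                state: 'maximized',
                focused: true
            });
        }
    });
});

The downloadId would need to be persisted (e.g., in chrome.storage) if the file is to be opened in a later session.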
Currently my office runs an AHK script to pull environment variables. These environment variables are then used to produce specific output when closing tickets, as my office has a ticket-closing environment. This works for the time being, but I am looking into automating this process, starting off by just trying to auto-close the tickets when a specific key is pressed. I have been able to perform this task, but I basically have to keep static variables in the Tampermonkey script for each user. Everyone using this ticket site already has the specific environment variables set, thanks to the AHK script, and I want to implement this in the Tampermonkey script without having to change the site completely.
I have hosted the site locally and used Node to do this successfully, but that approach does not work via the Tampermonkey route. I have been using process.env.ENV_VARIABLE on the Node side, but I am trying to refrain from implementing this on the site itself. Below are some basic examples of variables from the AutoHotkey script already in use.
GetGreeting() {
    global greeting
    return greeting
}

GetSalutation() {
    global salutation
    return salutation
}

GetUserName() {
    ; Read the Ticketuser environment variable into e_Ticketname.
    EnvGet, e_Ticketname, Ticketuser
    return e_Ticketname
}
When a specific key is pressed, the script should write the specific message and include said environment variables. Currently I don't think Tampermonkey can actually see the environment variables, as it keeps giving an undefined error. Any ideas?
So, upon further investigation, there does not appear to be a way to interact with the OS from inside the browser. I will be looking into another way to do what I need. Thank you!
You are able to access and run local files as JavaScript code in Tampermonkey using // @require.
So if you're able to have a local file with the content in this format:
variables = {
    var1: "hello there"
};
Then, in the script, add this line with the path to the file:
// @require file://Path\to\file
Since all the file contains is an assignment to a variable, you can then access it in the script:
console.log(variables.var1); // logs "hello there"
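Putting it together, a minimal sketch of a complete userscript (the @match domain and the file path here are placeholders for whatever you use):

// ==UserScript==
// @name     Read local variables
// @match    https://example.com/*
// @require  file://C:\path\to\variables.js
// ==/UserScript==

// "variables" is defined by the required local file.
console.log(variables.var1); // logs "hello there"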
For this you need to give the extension access to file URLs:
go to chrome://extensions
Tampermonkey > Details
Allow access to file URLs
You'll still need a way to generate the file in that format on the user's machine, though, either manually or with some code running locally.
One way to generate the file could be Node running locally. But if you're already running something locally, another way to get local data is to serve it using a simple HTTP server (like a Node server); then you can make a request from Tampermonkey using fetch or GM_xmlhttpRequest.
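A minimal sketch of that second approach (the port and the TICKETUSER variable name are placeholders; expose only the variables you deliberately choose to share):

// server.js -- run locally with: node server.js
const http = require('http');

http.createServer(function (req, res) {
    res.setHeader('Content-Type', 'application/json');
    // Expose only explicitly chosen environment variables.
    res.end(JSON.stringify({ ticketUser: process.env.TICKETUSER || '' }));
}).listen(8123, '127.0.0.1');

Then in the Tampermonkey script (with @grant GM_xmlhttpRequest and @connect 127.0.0.1 in the metadata block):

GM_xmlhttpRequest({
    method: 'GET',
    url: 'http://127.0.0.1:8123/',
    onload: function (response) {
        var env = JSON.parse(response.responseText);
        console.log(env.ticketUser);
    }
});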
As a side note, a hack I use to run local code triggered from Tampermonkey is the localexplorer extension. The extension allows you to open files and folders from the browser using "localexplorer://" URLs, so with JavaScript you can do window.open(local_url) and it will run or open that file/folder. The file can also be a .bat file, and you can run anything from it (including Node code).
There are some security considerations with this if you're worried that other websites might be able to open files on your system, but the extension prompts you every time you try to open something with localexplorer.
If you're still interested in this, the way I make it work without the prompt, with less risk, is this:
The prompt also lets you check Always open links of this type in the associated app for each domain. So what you can do is pick a specific domain for this purpose, always use that domain to open localexplorer links with a format of your choosing (like secretdomain.com/?C:\\path\\to\\file), and grant that domain permission to always open the links. Then use Tampermonkey to run some code on that domain so that when it detects that specific URL format, it redirects the page to a localexplorer URL, like this:
location.href = location.href.replace(/^https?:\/\/secretdomain\.com\/\?/, 'localexplorer:');
I'm creating an ASP.NET form with a FileUpload control, which will then email the details of the form and the file to an admin. I want to ensure this is secure (for the server and the recipient). The attachment should be a CV, so I will restrict it to typical text documents.
From what I can tell, the best bet is to check that the file extension or MIME type is of an allowed kind and to check it against the file's "magic numbers" to verify that the extension hasn't been changed. I'm not too concerned about how to go about doing that, but I want to know whether that really is enough.
I'd also be happy to use a third party product that takes care of this and I've looked at a couple:
blueimp jQuery file upload
http://blueimp.github.io/jQuery-File-Upload/
and cutesoft ajaxuploader
http://ajaxuploader.com/Demo/
But the blueimp one still seems to require custom server-side validation (I guess, being jQuery, it only handles the client side), and the .NET one checks that the MIME type matches the extension, but I thought the MIME type followed from the extension anyway.
So,
Do I need to worry about server security when the file is added as an attachment but not saved?
Is there a plugin or control that takes care of this well?
If I need to implement something for server validation myself is matching the MIME-type to the "magic numbers" good enough?
I'm sure nothing is 100% bulletproof but file upload is pretty common stuff and I assume most implementations are "safe enough" - but how!?
If it's relevant, here is my basic code so far
<p>Please attach your CV here</p>
<asp:FileUpload ID="fileUploader" runat="server" />
and on submit
MailMessage message = new MailMessage();
if (fileUploader.HasFile)
{
    try
    {
        // ContentType is a full MIME type such as "text/plain",
        // so check the prefix rather than comparing it to "text".
        if (fileUploader.PostedFile.ContentType.StartsWith("text/"))
        {
            // check magic numbers indicate same content type... if(){}
            if (fileUploader.PostedFile.ContentLength < 102400)
            {
                string fileName = System.IO.Path.GetFileName(fileUploader.PostedFile.FileName);
                message.Attachments.Add(new Attachment(fileUploader.PostedFile.InputStream, fileName));
            }
            else
            {
                // show a message saying the file is too large
            }
        }
        else
        {
            // show a message saying the file is not a text-based document
        }
    }
    catch (Exception ex)
    {
        // display ex.Message;
    }
}
A server can never be 100% secure, but we should do our best to minimize the risk of an incident. I should say at this point that I am not an expert, just a computer science student. So, here is the approach I would follow in such a case. Please comment with any additional tips you can give.
Generally speaking, to have a secure form, all client inputs must be checked and validated. Any information that does not originate from our system is untrusted.
Inputs from the client in our case:
the file's name: both the name itself and the extension
the file's content
Extension
We don't really care about the MIME type; that is info for a web server. We care about the file extension, because that is the indicator for the OS on how to run/read/open a file. We should support only specific file extensions (whatever your admin's PC can handle); there is no point supporting unknown file types.
Name (without the extension)
The name of the file is not always valuable info. When I deal with file uploads, I usually rename the file to an id (a username, a timestamp, a hash, etc.). If the name is important, always check/trim it; if you only expect letters or numbers, delete all other characters (I avoid leaving "/", "\", and "." in place, because they can be used to inject paths).
So now we suppose that the generated file name is safe.
Content
When you support unstructured files, you simply cannot validate the file's content yourself. So let an expert program do this for you: scan the files with an antivirus. Call the antivirus from the console (carefully; use mechanisms that avoid injections). Many antivirus programs can scan the contents of zips too (a malicious file sitting in a folder on your server is not a good idea). Always keep the scanning program updated.
In the comments I suggested zipping the file, in order to avoid any automatic execution on the admin's machine and on the server. The antivirus on the admin's machine can then handle it before unzipping.
Some more tips: don't give the client more information than they need. Don't let the client know where the files are saved, and don't let the web server access them for distribution if there is no need to. Keep a log of weird actions (slashes in filenames, too-big files, too-long names, warning extensions like "sh", "exe", "bat") and alert the admins with an email if anything weird happens (it is good to know whether your protections work).
All of this creates server workload (and more system holes), so you should perhaps count the number of files currently being scanned/checked before accepting a new file upload request (that is where I would launch a DDoS attack).
A quick Google search turns up Avast! For Linux - Command Line Guide; I do not promote Avast, I am just showing it as an existing example.
Last but not least, you are not paranoid. I manage a custom translation system that I coded, and spam and hack attacks have occurred more than once.
Some more thoughts: JavaScript running on a web page is only secure for the client's computer (thanks to the browser's security). We can use it to prevent invalid posts to the server, but this does not ensure that such requests will not be made, as JavaScript can be bypassed/edited.
So, all JavaScript solutions serve only as a first validation (usually just to help the user correct mistakes) and to correctly set the form data.
I am using sw-precache along with sw-toolbox to allow offline browsing of cached pages of an Angular app.
The app is served through a node express server.
One problem we ran into is that index.html sometimes doesn't seem to be updated in the cache on activation of a new service worker, although other assets have been updated.
This leaves users with an outdated index.html that tries to load a no-longer-existing versioned asset, in this case /scripts/a387fbeb.modules.js.
I am not entirely sure what's happening, because browsers where index.html has been correctly updated seem to report the same hash as the broken one.
On one browser, the outdated (problematic) index.html (cached with the 2cdd5371d1201f857054a716570c1564 hash) includes:
<script src="scripts/a387fbeb.modules.js"></script>
in its content. (This file no longer exists in the cache or on the remote.)
On another browser, the updated (good) index.html (cached with the same 2cdd5371d1201f857054a716570c1564 hash) includes:
<script src="scripts/cec2b711.modules.js"></script>
These two have the same cache hash, although the content returned to the two browsers is different!
What should I make of this? Does this mean that sw-precache doesn't guarantee atomic cache busting when a new SW activates? How can one protect against this?
In case it helps, this is the generated service-worker.js file from sw-precache.
Note: I realize I can use a networkFirst strategy (at least for index.html) to avoid this. But I'd still like to understand what's happening and figure out a way to use a cacheFirst strategy, to get the most out of performance.
Note 2: I saw in other related questions that one can change the name of the cache to force-bust all of the old cache. But this seems to defeat the point of sw-precache busting only updated content. Is this the way to go?
Note 3: Even if I hard-reload the browser where the website is broken, the site works, because a hard reload skips the service worker cache; but the cache itself remains wrong. The service worker doesn't seem to re-activate, my guess being that this specific SW has already been activated but failed to bust the cache correctly. Subsequent non-hard-refresh visits still see the broken index.html.
(The answers here are specific to the sw-precache library. The details don't apply to service workers in general, but the concepts about cache maintenance may still apply to a wider audience.)
If the content of index.html is dynamically generated by a server and depends on other resources that are either inlined or referenced via <script> or <link> tags, then you need to specify those dependencies via the dynamicUrlToDependencies option. Here's an example from the app-shell-demo that ships as part of the library:
dynamicUrlToDependencies: {
  '/shell': [
    ...glob.sync(`${BUILD_DIR}/rev/js/**/*.js`),
    ...glob.sync(`${BUILD_DIR}/rev/styles/all*.css`),
    `${SRC_DIR}/views/index.handlebars`
  ]
}
(/shell is used there instead of /index.html, since that's the URL used for accessing the cached App Shell.)
This configuration tells sw-precache that any time any of the local files that match those patterns change, the cache entry for the dynamic page should be updated.
If your index.html isn't being generated dynamically by the server, but instead is updated during build time using something like this approach, then it's important to make sure that the step in your build process that runs sw-precache happens after all the other modifications and replacements have taken place. This means using something like run-sequence to ensure that the service worker generation isn't run in parallel with other tasks.
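For example, with gulp and run-sequence, the ordering might look like this sketch (the task names here are hypothetical):

var gulp = require('gulp');
var runSequence = require('run-sequence');

// Generate the service worker strictly after every task that rewrites
// index.html or revisions the assets has finished.
gulp.task('build', function (callback) {
    runSequence('rev-assets', 'update-index-html', 'generate-service-worker', callback);
});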
If the above information doesn't help you, feel free to file a bug with more details, including your site's URL.
Is there a way to force the clients of a webpage to reload the cache (i.e., images, JavaScript, etc.) after an update to the code base has been pushed to the server? We get a lot of help desk calls asking why certain functionality no longer works. A simple hard refresh fixes the problem, as it downloads the newly updated JavaScript file.
For specifics, we are using GlassFish 3.x and JSF 2.1.x. This would apply to more than just JSF, of course.
To describe what behavior I hope is possible:
Website A has two images and two JavaScript files. A user visits the site and the 4 files get cached. As far as I'm concerned, there's no need to re-download those files unless the user specifically forces a hard refresh or clears their cache. Once an update to one of the files is pushed to the site, the server could include some sort of metadata in the header informing the client of the update. If the client chooses, the new files would be downloaded.
What I don't want to do is put a meta tag in the header of a page to prevent anything from ever being cached. I just want something that tells the client an update has occurred and that it should fetch the latest version. I suppose this would just be some sort of versioning on the client side.
Thanks for your time!
The correct way to handle this is by changing the URL convention for your resources. For example, ours looks like:
/resources/js/fileName.js
The way to get the browser to still cache the file, but to do it properly with versioning, is to add something to the URL. Putting the value in the query string doesn't allow caching, so the place to put it is after /resources/.
A reference for querystring caching: http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.9
So for example, your URLs would look like:
/resources/1234/js/fileName.js
So what you can do is use the project's version number (or some value in a properties/config file that you manually change when you want cached files to be reloaded), since this number should change only when the project is modified. Your URL could then look like:
/resources/cacheholder${project.version}/js/fileName.js
That should be easy enough.
The problem now is with mapping the URL, since the value in the middle is dynamic. The way we overcame that is with a URL-rewriting module that allowed us to filter URLs before they got to our application. The rewrite watched for URLs that looked like:
/resources/cacheholder______/whatever
and removed the cacheholder______/ part. After the rewrite, it looked like a normal request, and the server responded with the correct file, without any other specific mapping/logic. The point is that the browser thinks it's a new file (even though it really isn't), so it requests it, and the server figures it out and serves the correct file (even though it's a "weird" URL).
Of course, another option is to add this dynamic string to the filename itself, and then use the rewrite tool to remove it. Either way, the same thing is done: targeting a string of text during the rewrite and removing it. This allows you to fool the browser, but not the server :)
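To illustrate the same idea outside our Java setup, here is a sketch as Express-style middleware (the directory layout is hypothetical; our real implementation was a URL-rewriting module in front of the Java application):

var express = require('express');
var app = express();

// Strip the dynamic cacheholder segment before static resolution, so
// /resources/cacheholder1234/js/fileName.js serves /resources/js/fileName.js.
app.use(function (req, res, next) {
    req.url = req.url.replace(/^\/resources\/cacheholder[^\/]+\//, '/resources/');
    next();
});

app.use('/resources', express.static('resources'));
app.listen(3000);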
UPDATE:
An alternative that I really like is to set the filename based on the file's contents, and cache that. For example, that could be done with a hash. Of course, this isn't something you'd do manually and save to your project (hopefully); it's something your application/framework should handle. For example, in Grails, there's a plugin that "hashes and caches" resources, so that the following occurs:
Every resource is checked
A new file (or mapping to this file) is created, with a name that is the hash of its contents
When adding <script>/<link> tags to your page, the hashed name is used
When the hash-named file is requested, it serves the original resource
The hash-named file is cached "forever"
What's cool about this setup is that you don't have to worry about caching correctly: just set the files to be cached forever, and the hashing takes care of keeping files/mappings available based on content. It also means that rollbacks/undos are already cached and load quickly.
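A rough sketch of the hashing step itself (in Node for illustration; the Grails plugin handles this for you, and the file names here are hypothetical):

var crypto = require('crypto');
var fs = require('fs');

// Name the file after a hash of its contents, e.g. app.3f2a9c1b.js, so
// the name (and therefore the cached copy) changes only when the
// contents change.
var source = fs.readFileSync('app.js');
var hash = crypto.createHash('md5').update(source).digest('hex').slice(0, 8);
fs.copyFileSync('app.js', 'app.' + hash + '.js');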
I use a no-cache parameter for these situations.
I have a string constant value (from a config file) like:
$no_cache = "v11";
and in my pages, I use assets like:
<img src="a.jpg?nc=$no_cache">
When I update my code, I just change the $no_cache value, and it works like a charm.
If this has been asked before, I apologize, but this is kind of a hard question to search for. This is the first time I have come across this in all my years of web development, so I'm pretty curious.
I am editing some HTML files for a website, and I have noticed that in the src attribute of the script tags, the previous author appended a question mark followed by data.
Ex: <script src="./js/somefile.js?version=3.2"></script>
I know that this is used in some languages for value passing in GET requests, such as PHP, but as far as I knew, this wasn't done when calling a JavaScript file. Does anyone know what this does, if anything?
EDIT: Wow, a lot of responses. Thanks, one and all. Since a lot of people are saying similar things, I will post a global update instead of commenting on each answer.
In this case the JavaScript files are static, hence my curiosity. I have also opened them up and did not see anything attempting to access variables on file load. I had never thought about caching or plain version control, both of which seem more likely in this circumstance.
I believe what the author was doing was ensuring that if he creates version 3.3 of his script, he can change the version= in the URL of the script to ensure that users download the new file instead of running off the old script cached in their browser.
So in this case it is part of the caching strategy.
My guess is it's so if he publishes a new version of the JavaScript file, he can bump the version in the HTML documents. This will not do anything server-side when requested, but it causes the browser to treat it as a different file, effectively forcing the browser to re-fetch the script and bypass the local cache of the file.
This way, you can set a really high cache time (like a week or a month!) but not sacrifice the ability to update scripts frequently if necessary.
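Concretely, using the tag from the question, a deploy just bumps the value:

<!-- before the update: browsers keep serving their cached copy -->
<script src="./js/somefile.js?version=3.2"></script>
<!-- after the update: the new query string makes browsers re-fetch -->
<script src="./js/somefile.js?version=3.3"></script>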
What you have to remember is that ./js/somefile.js?version=3.2 doesn't have to be a physical file. It can be a page which creates the file on the fly. So you could have the request say, "Hey, give me version 3 of this JS file," and the server-side code creates it and writes it to the output stream.
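For illustration, a hypothetical server route that does this, sketched with Express (the real site could use any server-side stack):

var express = require('express');
var app = express();

// The URL looks like a static .js file, but the body is generated per
// request based on the version query parameter.
app.get('/js/somefile.js', function (req, res) {
    var version = req.query.version || '1.0';
    res.type('application/javascript');
    res.send('console.log("somefile.js, version ' + version + '");');
});

app.listen(3000);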
The other option is to force the browser to not cache the file and pull down the new one when it makes the request. Since the URI changed, it will think the file is completely new.
A (well-configured) web server will send static files like JavaScript source code once and tell the web browser to cache that file locally for a certain period of time (could be a day, a week, a month, or longer). When the browser sees another request for that same file, it will just use that version instead of getting new code from the server.
If the URL changes -- for example by adding a query string -- then the browser suspects that its cached version is no good and gets a new one. As such, the ? helps developers say "Oops, I changed this file, make sure the browser gets a new copy."
In this case it's probably being used to ensure the source file isn't cached between versions.
Of course, it could also be used server-side to generate the JavaScript file; without knowing what's on the other end of the request, it's difficult to be definitive.
BTW, the ?... portion of the URL is called the query string.
This is used to guarantee that the browser downloads a new version of the script when one is available. The version number in the URL is incremented each time a new version is deployed, so the browser sees it as a different file.
Just because the file extension is .js doesn't mean that the target is an actual .js file. They could set up their web server to pass the requested URL to a script (or literally have a script named somefile.js) and have that interpret the filename and version.
The query string has nothing to do with the JavaScript itself. It appears some server-side code is serving up a different version depending on that query string.
You should never assume anything about paths in a URL. The extension on a path in a URL doesn't really tell you anything; URLs can be completely dynamic, served by server-side code, or rewritten dynamically by web servers.
Now, it is common to add a query string to URLs when loading JavaScript files to prevent client-side caching. If the page updates and references a new version of the script, the page can bust through the cache and cause the client to refresh its script.