React/Service worker unsure of reference - javascript

I am trying to set up a service worker for my React app using sw-precache (https://github.com/GoogleChrome/sw-precache) and am looking at some examples for reference. I am unsure of what a certain line of code means, and can't seem to find an explanation by googling. So, looking at /!(*map*) in the context of -
module.exports = {
    stripPrefix: 'build/',
    staticFileGlobs: [
        'build/*.html',
        'build/manifest.json',
        'build/static/**/!(*map*)' <-- here
    ],
    dontCacheBustUrlsMatching: /\.\w{8}\./,
    swFilePath: 'build/service-worker.js'
};
I am not sure what that does/means and am wondering if anyone could provide some clarity; I have not been able to find a good reference by googling. I am looking at https://github.com/jeffposnick/create-react-pwa/blob/c-r-pwa-0.6.0/sw-precache-config.js for reference. Thanks!

It just means that map files won't be included - i.e. the .map source map files generated for your CSS and JS.
Strictly speaking, though, it excludes anything whose filename (the last path segment) contains the characters map.
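The !( ) part is extglob "negation" syntax: the final path segment must not match the pattern inside the parentheses. If you want to see it in action, here is a small sketch using the minimatch package (the matcher underlying the glob library that sw-precache uses); the file names are just made-up examples:

const { minimatch } = require('minimatch'); // v9+ exports a named function; older versions export the function directly

const pattern = 'build/static/**/!(*map*)';

// Regular assets match the pattern, so they would be precached.
console.log(minimatch('build/static/js/main.3b7c1f2a.js', pattern));    // true
console.log(minimatch('build/static/css/main.9a0d4e11.css', pattern));  // true

// Source maps have "map" in the final path segment, so they are excluded.
console.log(minimatch('build/static/js/main.3b7c1f2a.js.map', pattern));   // false
console.log(minimatch('build/static/css/main.9a0d4e11.css.map', pattern)); // false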

How to edit an object within a very simple JS file using Node.js

Whilst this question is related to Workbox and Webpack, it does not require any prior knowledge of either library.
Background (skip if not familiar with Workbox)
I am currently utilising the InjectManifest plugin from Workbox 4.3.1 (workbox-webpack-plugin). This version of the library offers an option called manifestTransforms, but unfortunately, the transformations are not applied to assets within the webpack compilation (this is a known issue).
Whilst this has been fixed in Workbox v5+, I am unable to upgrade due to another library in my build process requiring webpack v3 (Dynamic Importing in Laravel Mix).
The reason I mention the above is that, unfortunately, the solution is not simply to upgrade to Workbox v5+.
The Problem
I have an auto-generated file that looks like this:
self.__precacheManifest = (self.__precacheManifest || []).concat([
    {
        "revision": "68cd3870a6400d76a16c",
        "url": "//css/app.css"
    },
    // etc...
]);
I need to somehow extract the contents of the object stored within self.__precacheManifest, apply my own transformations, and then save it back to the file.
What I have Tried...
This is as far as I have got:
const fs = require('fs');
const path = require('path');

// As the precached filename is hashed, we need to read the
// directory in order to find the filename. Assuming there
// are no other files called `precache-manifest`, we can assume
// it is the first value in the filtered array. There is no
// need to test whether [0] has a value, because if it doesn't
// this needs to throw an error.
let manifest = fs
    .readdirSync(path.normalize(`${__dirname}/dist/js`))
    .filter(filename => filename.startsWith('precache-manifest'))[0];
require('./dist/js/' + manifest);
// This does not fire because of thrown error...
console.log(self.__precacheManifest);
This throws the following error:
self is not defined
I understand why it is throwing the error, but I have no idea how I am going to get around this because I need to somehow read the contents of the file in order to extract the object. Can anyone advise me here?
Bear in mind that once I have applied the transformations to the object, I then need to save the updated object to the file...
Since self refers to window in the browser, and window does not exist in Node.js, a workaround is needed.
One thing that should work is to define the variable self in Node's global scope and let the require statement populate the content of the variable, like this:
global['self'] = {};
require('./dist/js/' + manifest);
console.log(self.__precacheManifest);
To save the modified contents back to the file:
const newPrecacheManifest = JSON.stringify(updatedArray);
fs.writeFileSync('./dist/js/' + manifest, `self.__precacheManifest = (self.__precacheManifest || []).concat(${newPrecacheManifest});`, 'utf8');
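Putting it all together, the whole read-transform-write step might look roughly like this (the dist/js location matches the snippets above, while the CDN prefix is just a made-up example of a transformation):

const fs = require('fs');
const path = require('path');

// Locate the hashed precache-manifest file in the build output.
const manifestFilename = fs
    .readdirSync(path.normalize(`${__dirname}/dist/js`))
    .filter(filename => filename.startsWith('precache-manifest'))[0];
const manifestPath = path.join(__dirname, 'dist/js', manifestFilename);

// Let the manifest populate a fake `self` in Node's global scope.
global['self'] = {};
require(manifestPath);

// Apply whatever transformation is needed; prefixing the URL is only an example.
const updatedArray = self.__precacheManifest.map(entry => ({
    revision: entry.revision,
    url: 'https://cdn.example.com' + entry.url,
}));

// Serialise the transformed entries and write them back in the original format.
const newPrecacheManifest = JSON.stringify(updatedArray);
fs.writeFileSync(
    manifestPath,
    `self.__precacheManifest = (self.__precacheManifest || []).concat(${newPrecacheManifest});`,
    'utf8'
);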

Moving created files with JXA

I'm new to JXA scripting, but I'm attempting to troubleshoot some older scripts currently in place here at work. They loop through an InDesign document and create several PDFs based on it. Previously, they would be stored in a folder called "~/PDFExports". However, this doesn't work with 10.10.
If I change the code to just place the PDFs in "~/", it works fine. From there, I'd like to move the files to "~/PDFExports", but I can't seem to find an answer on how to do that. I've seen things about making calls to ObjC, or calling Application('Finder'), but neither works - they both return undefined.
Am I just missing something basic here, or is it really this hard to simply move a file with JXA?
EDIT: Some syntax for how I'm creating the folder in question and how I'm attempting to work with Finder.
// This is called in the Main function of the script, on first run.
var exportFolder = new Folder(exportPath);
if (!exportFolder.exists) {
    exportFolder.create();
}

// This is called right after the PDF is created. file is a reference to the
// actual PDF file, and destination is a file path string.
function MoveFile(file, destination) {
    var Finder = Application("Finder");
    Application('Finder').move(sourceFile, { to: destinationFolder });
    alert("File moved");
}
Adobe apps have long included their own embedded JS interpreter, JS API, and .jsx filename extension. It has nothing to do with JXA, and is not compatible with it.
InDesign's JSX documentation:
http://www.adobe.com/devnet/indesign/documentation.html#idscripting
(BTW, I'd also strongly advise against using JXA for Adobe app automation as it has a lot of missing/broken features and application compatibility problems, and really isn't fit for production work.)
Here's the link to Adobe's InDesign Scripting forum, which is the best place to get help with JSX:
https://forums.adobe.com/community/indesign/indesign_scripting
You could use Cocoa to create the folder
var exportFolder = $.NSHomeDirectory().stringByAppendingPathComponent("PDFExports")
var fileManager = $.NSFileManager.defaultManager
var folderExists = fileManager.fileExistsAtPath(exportFolder)
if (!folderExists) {
    fileManager.createDirectoryAtPathWithIntermediateDirectoriesAttributesError(exportFolder, false, $(), $())
}
and to move a file
var success = fileManager.moveItemAtPathToPathError(sourceFile, destinationLocation, $());
if (success) alert("File moved");
Note that destinationLocation must be the full path including the file name, and both sourceFile and destinationLocation must be NSString objects, like exportFolder above.
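For example, a minimal sketch of building those two NSString paths with the variables from the snippet above (the PDF file name is just a placeholder):

var fileName = "example.pdf" // placeholder; use the name of the PDF you just exported
var sourceFile = $.NSHomeDirectory().stringByAppendingPathComponent(fileName)
var destinationLocation = exportFolder.stringByAppendingPathComponent(fileName)

var success = fileManager.moveItemAtPathToPathError(sourceFile, destinationLocation, $())
if (success) alert("File moved")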
Could it be that the folder is missing? Could your reference to the folder object be invalid? Do you have any syntax to show?
I will share some of what I learned about the JXA move and duplicate methods. I am not a professional programmer, just an attorney who is passionate about automation; my comments come from much trial and error, reading whatever I could find online, and a lot of struggle. The move method does not work well with Finder; use the System Events move method instead. The duplicate method works just fine in Finder, but does not work well in System Events. This is a modified snippet from a script I wrote showing move() using System Events.
(() => {
    const strPathTargetFile = '/Users/bretfarve/Documents/MyFolderA/myFile.txt';
    const strPathFolder = '/Users/bretfarve/Documents/MyFolderB/';

    /* System Events Objects */
    const SysEvents = Application('System Events');
    const objPathFolder = SysEvents.aliases[strPathFolder];

    SysEvents.move(SysEvents.aliases.byName(strPathTargetFile), {to: objPathFolder});
})();

How to Add Global/Public Variables to grunt-contrib-uglify Output

Okay, so I am way new to Grunt and Node.js. I am building a site, and decided that the 'main.js' file was getting way too big. So, I split it up, and I am now trying to use Grunt to piece all of these JS files back together.
The issue that I have is that I need to make some global variables available to all of the various functions in all of these JS files. To be more specific, every page on our site is identified via an id in the body tag:
<body id="home">
Many of these JS files contain if statements that ensure certain functions only run if the appropriate page is loaded. For example:
if (page == 'home') {
    var title = "Home Page";
    $('.page-title').text(title);
}
Notice the page variable? That guy is the one that I need to make available to all of these files (after grunt-contrib-uglify merges them together). So, I figured I'd assign a new "unique" variable name, and make it global.
I noticed that grunt-contrib-uglify has a 'wrap' option listed in its documentation. However, no examples are given as to how to use it.
Can anyone tell me:
- How to use the 'wrap' option in 'grunt-contrib-uglify'
- If this is the right grunt plugin for what I am trying to do?
One idea I had (as a last resort) is to create a before.js and after.js and put the beginning and end (respectively) of what I wish to wrap around the other files in each. But, I think the 'wrap' option is what I need, yes?
UPDATE: Here is a link to my "merged" JS file:
main.js
And a link to my Gruntfile:
Gruntfile.js
I have been having the same problem and searching for a solution, but I think I found an answer.
Use this in your gruntfile:
uglify: {
    options: {
        wrap: true
    }
}
The documentation for the wrap property indicates that the variables will be made available in a global variable, and looking at the generated code that does seem to be the case. Passing a string value to the parameter does seem to create a global variable with that name.
However, wrap: true seems to make all objects and properties available in the global scope. So instead of globals.page.title (which I can't get to work, anyway), you can just use page.title. Much, much easier and simpler.
If this suits your purposes, I'd recommend doing this instead.
OK, this one is tricky; I was stuck on it for a while...
The way you do this with grunt-contrib-uglify for front-end JS is to
create multiple files like
SomeClass.js
OtherClass.js
main.js
and use some module (grunt-file-dependencies or grunt-contrib-concat) and set it up to concatenate your files. Then set up uglify in your Gruntfile.js like
...
uglify: {
    options: {
        wrap: "myPublicObject",
        ...
    },
In one file (main.js, for example) the exports variable has to be assigned; the entire file might look like this:
var otherClassInst = new OtherClass();
var someClassInst = new SomeClass();
exports = otherClassInst;
Now, what it actually does:
Uglify will pick the enclosing context (this) and define a property on it named myPublicObject. Then it wraps your source code in a function and creates an exports variable inside it (DO NOT DECLARE var exports anywhere yourself). At the end it assigns whatever you exported to that property. If you don't assign anything (you never use exports =), the initial value is {}, so the property exists anyway, holding an empty object.
To make this super-clear:
if you put your code into a page like <script src="myminifiedfile.min.js"></script>, then the enclosing context is window, so
window.myPublicObject is an instance of OtherClass, while window.someClassInst, window.SomeClass and window.OtherClass are undefined.
This is unlikely, but if you just copy the content of the minified result and wrap it in a different function, the object you exported will be visible only via this["myPublicObject"]. In other words, uglify's wrap doesn't make things globally accessible; it makes them accessible in the enclosing context.
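As a rough illustration (a hand-written sketch of the idea, not uglify's literal output), the wrapped bundle ends up behaving like this:

// A hand-written approximation of what wrap: "myPublicObject" produces:
this["myPublicObject"] = (function () {
    var exports = {};  // created by the wrapper; never declare `var exports` yourself

    // --- concatenated sources start here (OtherClass.js, SomeClass.js, main.js) ---
    function OtherClass() { this.kind = 'other'; }  // stand-in for the real class
    function SomeClass() { this.kind = 'some'; }    // stand-in for the real class

    var otherClassInst = new OtherClass();
    var someClassInst = new SomeClass();
    exports = otherClassInst;
    // --- concatenated sources end here ---

    return exports;
})();

// In a browser, top-level `this` is `window`, so afterwards:
// window.myPublicObject -> the OtherClass instance
// window.someClassInst  -> undefined (it never left the wrapper function)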

i18next best practice

I've successfully implemented i18next, which by the way is a great library! Though I'm still in search of the "best practice". This is the setup I have right now, which in general I like:
var userLanguage = 'en'; // set at runtime
i18n.init({
    lng : userLanguage,
    shortcutFunction : 'defaultValue',
    fallbackLng : false,
    load : 'unspecific',
    resGetPath : 'locales/__lng__/__ns__.json'
});
In the DOM I do stuff like this:
<span data-i18n="demo.myFirstExample">My first example</span>
And in JS I do stuff like this:
return i18n.t('demo.mySecondExample', 'My second example');
This means I maintain the English translation within the code itself. I do however maintain other languages using separate translation.json files, using i18next-parser:
gulp.task('i18next', function()
{
    gulp.src('app/**')
        .pipe(i18next({
            locales : ['nl','de'],
            output : '../locales'
        }))
        .pipe(gulp.dest('locales'));
});
It all works great. The only problem is that when I've set 'en' as the userLanguage, i18next insists on fetching the /locales/en/translation.json file, even though it doesn't contain any translations. To prevent a 404, I currently serve an empty json object {} in that file.
Is there a way to prevent loading the empty .json file at all?
Maybe I'm missing something here but couldn't you simply do this:
if (userLanguage != 'en') {
    i18n.init({
        lng : userLanguage,
        shortcutFunction : 'defaultValue',
        fallbackLng : false,
        load : 'unspecific',
        resGetPath : 'locales/__lng__/__ns__.json'
    });
}
That way your i18n script wouldn't be initialized unless you actually needed the translation service.
i18next-parser author here. I will explain how I use i18next and hopefully it will help:
1/ I do not use defaultTranslation in the code, because it doesn't belong there. I understand the benefit of having the actual text, but the code can get bloated quickly. The difficult part is defining intelligible translation keys; if you do that, you don't really need the default translation text anymore, because the keys are self-explanatory (see the short sketch after this list).
2/ If you get a 404 on /locales/en/translation.json, it is probably because you don't have the file in your public directory, or something similar. With gulp you can have multiple destinations and do .pipe(gulp.dest('locales')).pipe(gulp.dest('public/locales')), for instance.
3/ If there is no translation in the catalog, make sure you run the gulp task first. Regarding populating the catalog with the defaultTranslation you have, that is a tricky problem to solve with regexes. Think of this case: <div data-i18n="key">Default <div>translation</div></div>. The parser needs to be able to parse the inner HTML and extract all the content. I just never took the time to implement it, as I don't use it.
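To illustrate point 1, a tiny sketch of what purely key-based usage looks like (the key and the Dutch text are made-up examples):

// locales/nl/translation.json (maintained by i18next-parser) might contain:
// { "header": { "welcomeTitle": "Welkom" } }

// In JS you then reference only the key; no default text is embedded in the code:
var welcomeTitle = i18n.t('header.welcomeTitle');

// Or in the DOM, relying on the key alone:
// <h1 data-i18n="header.welcomeTitle"></h1>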
See http://i18next.com/pages/doc_init.html under "whitelist languages to be allowed on init" (you can't link to a specific fragment on those docs...):
i18n.init({ lngWhitelist: ['de-DE', 'de', 'fr'] });
Only specified languages will be allowed to load.
That should solve your problem. Though I suppose a blacklist would be even better.

how should I write my define to work with curl.js?

I'm reading Addy Osmani's excellent blog post about writing AMD modules. I start with a simple chunk of js that I lifted from his post:
define('modTest', [],
    // module definition function
    function () {
        // return a value that defines the module export
        // (i.e the functionality we want to expose for consumption)
        // create your module here
        var myModule = {
            doStuff: function(){
                console.log('Yay! Stuff');
            }
        }
        return myModule;
    }
);
I took out the dependencies on foo and bar. Just want a simple object that logs to the console.
So I save that in /js/modTest.js and then try to load it:
curl(['/js/modTest.js'])
    .then(function(mt) {
        console.log("Load complete");
        console.log("mt:");
        console.log(mt);
        mt.doStuff()
    }, function(ex) { alert(ex.message); })
Result: error: Multiple anonymous defines in URL. OK, that didn't work. Tried adding in a namespace, define('myCompany/modTest', [], ...), same result. Tried adding an empty string in the dependency array, same result.
Also tried curl(['modTest.js'], function(dep){console.log(dep)}); with the same result.
Is the code in Addy's blog post incorrect? Am I doing something wrong? Maybe a bug in curl?
Update 5/24: I ditched curl.js in favor of require.js. Zero odd errors, very little work to change over. I did have to deal with amdefine a bit to get my code running client and server side (one object is in both places, so grunt had to be configured to take care of that). My defines generally look like:
define(->
class AlphaBravo
...
And never have any trouble loading.
You asked curl() to fetch a module called "/js/modTest.js". It found the file and loaded it and found a module named "modTest", so it complained. :) (That error message is horribly wrong, though!)
Here's how you can fix it (pick one):
1) Remove the ID from your define(). The ID is not recommended. It's typically only used by AMD build tools and when declaring modules inside test harnesses.
2) Refer to the module by the ID you gave it in the define(). (Again, the ID is not recommended in most cases.)
curl(['modTest'], doSomething);
3) Map a package (or a path) to the folder with your application's modules. It's not clear to me what that would be from your example, since modTest appears to be a stand-alone module. However, if you were to decide to organize your app's files under an "app" package, your packages config might look like this:
packages: [ { name: 'app', location: 'app' } ]
Then, when you have code that relies on the modTest module, you can get to it via an ID of "app/modTest".
curl(['app/modTest'], doSomething);
I hope that helps clear things up!
Fwiw, Addy's example could actually work with the right configuration, but I don't see any configuration in that post (or my eyes missed it). Something like this might work:
packages: [ { name: 'app', location: '.' } ]
-- John
I've just had a similar problem which turned out to be the include order I was using for my other libraries. I'm loading handlebars.js, crossroads.js, jquery and a few other libraries into my project in the traditional way (script tags in head) and found that when I place the curl.js include first, I get this error, but when I include it last, I do not get this error.
My head tag now looks like this:
<script type="text/javascript" src="/js/lib/jquery.js"></script>
<script type="text/javascript" src="/js/lib/signals.js"></script>
<script type="text/javascript" src="/js/lib/crossroads.js"></script>
<script type="text/javascript" src="/js/lib/handlebars.js"></script>
<script type="text/javascript" src="/js/lib/curl.js"></script>
<script type="text/javascript" src="/js/main.js"></script>
You have a problem with your define call: it is NAMED.
See AMD spec for full story on how to write defines, but here is what I would expect to see in your js/modTest.js file:
define(/* this is where the difference is */ function () {
    // return a value that defines the module export
    // (i.e the functionality we want to expose for consumption)
    // create your module here
    var myModule = {
        doStuff: function(){
            console.log('Yay! Stuff');
        }
    }
    return myModule;
});
Now, the boring explanation:
CurlJS is awesome. In fact, after dealing with both, RequireJS and CurlJS, I would say CurlJS is awesome-er than RequireJS in one category - reliability of script execution ordering. So you are on the right track.
One of the major things that is different about CurlJS is that it uses "find at least one anonymous define per loaded module, else it's an error" logic. RequireJS uses a timeout, where it effectively ignores cases where nothing was defined in a given file, but blows up on caught loading/parsing errors.
That difference is what is getting you here. CurlJS expects at least one anonymous (as in NOT-named) define per loaded module. It still handles named defines fine, as expected. The second you move the contents of "js/modTest.js" into inline code, you will have to "name" the define. But, that's another story.
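For completeness, here is a rough sketch of that inline case (curl.js assumed to already be loaded on the page; the module body is just the example from the question):

// Inline in the page there is no module URL to infer an ID from, so the define must be named:
define('modTest', function () {
    return {
        doStuff: function () {
            console.log('Yay! Stuff');
        }
    };
});

// ...and the module is then requested by that ID:
curl(['modTest'], function (mt) {
    mt.doStuff(); // logs "Yay! Stuff"
});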
