I am using the gettext.js library to localize content generated from a JS file. The situation is that I currently have to create and write every .po file manually. I know we can scan PHP files for gettext strings using Poedit. So, is it possible to scan JS files for gettext strings using Poedit?
I achieved this by creating a new parser in Poedit that reuses xgettext's Python language support.
File > Preferences > Parsers > New
Language:
JS
List of extension:
*.js
Parser command:
xgettext --language=Python --force-po -o %o %C %K %F
Item in Keyword List:
-k%k
Item in input files list:
%f
Source code charset:
--from-code=%c
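For reference, once Poedit fills in those placeholders (%o output file, %C charset, %K keywords, %F files), it ends up running something along these lines; the keyword and charset shown here are example values, not ones Poedit mandates:

```
xgettext --language=Python --force-po -o output.po --from-code=UTF-8 -k_ -kgettext script.js
```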
I found this tutorial while searching on the topic, which helped me reach this solution: Tutorial Here>>
The tutorial is actually in French; the link is a Google-translated (English) version.
Since xgettext version 0.18.3, you can use JavaScript as the language parameter.
This version of xgettext has shipped with Poedit since at least version 1.6.2.
The xgettext commandline program is used to scan source code and can parse the following languages:
C, C++, ObjectiveC, Shell, Python, Lisp, EmacsLisp, librep, Scheme, Java, C#, awk, Tcl, Perl, PHP, GCC-source, Glade
Although JavaScript is not listed as a language, I tried a few of them and Perl actually worked. Try this:
echo " testFunc('foo');" > test.js;
xgettext --keyword=testFunc --output=- test.js --language="perl";
To do this from Poedit, open Preferences > Parsers > Perl, add ;*.js to the file-extensions list, and add --language=Perl after xgettext in the Parser command field. This worked for me: I was able to extract new strings from a JS file this way.
Although I don't know how gettext.js works, a better approach may be to convert the PO files to a native JavaScript file format.
xgettext now supports JavaScript natively, so the command is simply:
xgettext --output=output.pot --language=JavaScript *.js
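For a sense of what the extractor looks for, here is a rough illustration in Python. This is a toy regex, nothing like xgettext's real JavaScript parser, and the `_`/`gettext` keyword names are just the common defaults:

```python
import re

js_source = """
var label = _('Save changes');
alert(gettext("File deleted"));
var notText = unrelated('skip me');
"""

# Toy scanner: find _('...') / gettext("...") calls. The real xgettext
# also handles escapes, string concatenation, plural forms, comments, etc.
pattern = re.compile(r"""\b(?:_|gettext)\(\s*(['"])(.*?)\1\s*\)""")
messages = [m.group(2) for m in pattern.finditer(js_source)]
print(messages)  # ['Save changes', 'File deleted']
```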
Related
I have to decode a BIN file downloaded from a company server into .txt or JSON format using the protocol buffer compiler (which I have installed). I am using the following decode command in the VS Code terminal, but I cannot work out what the error means. I am an automotive systems engineer with no programming background and have to decode this BIN file to complete my graduation thesis, so any help would be greatly appreciated.
Windows Powershell error:
PS C:\Users\user\Desktop\MyPrograms> C:\protoc-3.14.0-win64\bin\protoc --decode=se.niradynamics.ncs.protobuf.output.RoadLayerTile roadlayertile.proto "< road_roughness_aggregation_23602633.bin >" output.txt
Could not make proto path relative: < road_roughness_aggregation_23602633.bin >: No such file or directory
It simply can't find the bin file; you have to locate it with the Windows file manager first.
You should not have quotes around the redirection operators (< and >); otherwise, instead of redirecting stdin/stdout, you are looking for a literal file with > and < in its name, which is not a thing that can exist. Remove the double quotes entirely.
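With the quotes removed, the command would look like the following. Note that stdin/stdout redirection with < and > works in classic cmd.exe, whereas PowerShell reserves the < operator, so cmd.exe is the safer place to run it:

```
C:\protoc-3.14.0-win64\bin\protoc --decode=se.niradynamics.ncs.protobuf.output.RoadLayerTile roadlayertile.proto < road_roughness_aggregation_23602633.bin > output.txt
```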
To be honest, though: using protoc here is very much doing this the hard way. If this were me, I'd run the .proto through either protoc or protogen to get a code model that represents the schema in my language of choice, deserialize into that, and then run the model through any JSON serializer.
I'm inputting Markdown and outputting HTML files with Pandoc. Using the --no-highlight flag, I can output the code without the built-in basic syntax highlighting and use Prism.js to highlight it instead, which is much more robust.
However, Prism requires that the code or pre element have language-* in its class name. Using PHP as an example, Pandoc outputs <pre class="php">. I've managed to hack it to work by using:
```language-php
As the start of each code block. However, when I want to export the same code as an EPUB, it won't recognize the language to be able to use the built in syntax highlighting.
Here are the commands I use for EPUB and HTML output:
# epub output
pandoc assets/metadata.yaml chapters/*.md -o build/book.epub
# html output
pandoc assets/metadata.yaml chapters/*.md -s --toc --no-highlight --css ../assets/style.css -A assets/template/footer.html -o build/book.html
My issue:
I want to be able to write
```php
As the start of my code blocks, instead of
```language-php
So both Prism.js and the built-in syntax highlighter will work, with my EPUB and HTML generation.
If I could get Pandoc to interpret "```php" as class="language-php", this would solve the issue.
Here is a link on the Pandoc GitHub for someone else with the same issue I'm trying to solve.
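Side note: Pandoc's fenced_code_attributes extension allows multiple classes on a code block, so writing the block as below should yield class="php language-php" (satisfying both highlighters), at the cost of more verbose fences:

```
~~~ {.php .language-php}
echo $x;
~~~
```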
I'm also for using sed, but as a pre-processor. You could write a script like the one below and name it pre-process:
#!/bin/bash -e
derived_dir=derived
rm -fr "${derived_dir}" && mkdir -p "${derived_dir}"
for file in "$@"
do
    sed 's/```php/```language-php/g' "${file}" > "${derived_dir}/$(basename "${file}")"
done
echo "${derived_dir}"/*
Then you could use ```php in your source, and produce HTML via:
pandoc assets/metadata.yaml $(pre-process chapters/*.md) -s --toc --no-highlight --css ../assets/style.css -A assets/template/footer.html -o build/book.html
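A quick check of the substitution itself (backticks have no special meaning to sed, so no escaping is needed):

```shell
# Feed a sample fenced block through the same substitution the script uses;
# only the opening fence is rewritten.
printf '%s\n' '```php' 'echo $x;' '```' | sed 's/```php/```language-php/g'
```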
Hope this helps.
I am trying to minify a .js file that includes code like this:
DIACRITICS = {"\u24B6":"A","\uFF21":"A","\u00C0":"A","\u00C1":"A","\u00C2":"A","\u1EA6":"A","\u1EA4":"A","\u1EAA":"A","\u1EA8":"A","\u00C3":"A","\u0100":"A","\u0102":"A","\u1EB0":"A","\u1EAE":"A","\u1EB4":"A","\u1EB2":"A","\u0226":"A","\u01E0":"A","\u00C4":"A","\u01DE":"A","\u1EA2":"A","\u00C5":"A","\u01FA":"A","\u01CD":"A","\u0200":"A","\u0202":"A","\u1EA0":"A","\u1EAC":"A","\u1EB6":"A","\u1E00":"A","\u0104":"A","\u023A":"A","\u2C6F":"A","\uA732":"AA","\u00C6":"AE", ....
The problem is, when I use a tool like http://javascript-minifier.com/ or http://refresh-sf.com/ to minify it, the above code gets changed to this:
,j={"Ⓐ":"A","A":"A","À":"A","Á":"A","Â":"A","Ầ":"A","Ấ":"A","Ẫ":"A","Ẩ":"A","Ã":"A","Ā":"A","Ă":"A","Ằ":"A","Ắ":"A","Ẵ":"A","Ẳ":"A","Ȧ":"A","Ǡ":"A","Ä":"A","Ǟ":"A","Ả":"A","Å":"A","Ǻ":"A","Ǎ":"A","Ȁ":"A","Ȃ":"A","Ạ":"A","Ậ":"A","Ặ":"A","Ḁ":"A","Ą"
I assume that will cause problems when it executes? Is there any way around this?
Try using Microsoft's Ajax Minifier: http://ajaxmin.codeplex.com/
This is an encoding issue, so run the program with the -enc:out ascii switch.
Once you download the program, open it; it appears as a command-prompt window. cd to the directory of your JS file, then run:
ajaxminifier file.js -o file.min.js -enc:out ascii
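As an aside on whether the un-escaped output "causes problems": the \u escape and the literal character denote the same string value once parsed, so the minified file only breaks if it is later saved or served with the wrong charset. A quick check (Python shown here; JavaScript's \uXXXX escapes parse the same way):

```python
# The escape sequence and the literal character are one and the same string;
# only the *source spelling* differs.
escaped = "\u24B6"   # CIRCLED LATIN CAPITAL LETTER A
literal = "Ⓐ"
print(escaped == literal)  # True
```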
I'm trying to "build" my project using Closure Library; sadly, after many tests I can't build it without problems. My project is a library, so I don't have an entry point or anything like that; most of the code consists of objects and functions for the user.
My project layout looks like this:
- build
- build.sh
- compiler.jar
- libs
- closure-library
- src
My build.sh file:
java -jar compiler.jar --compilation_level SIMPLE_OPTIMIZATIONS --js_output_file out.js `find ../src/ -name '*.js'`
With this command line I get the error goog.require('goog.vec.Vec2'); so I think I need to include the Closure Library in this line, right?
So I tried changing my build.sh to something like this (adding the closure-library folder):
java -jar compiler.jar --compilation_level SIMPLE_OPTIMIZATIONS --js_output_file out.js `find ../src/ ../libs/closure-library/closure/ -name '*.js'`
With this script, I get many errors from the Closure Library, like:
../libs/closure-library/closure/goog/i18n/datetimeformat_test.js:572: ERROR - This style of octal literal is not supported in strict mode.
var date = new Date(Date.UTC(2015, 11 - 1, 01, 11, 0, 1));
And my resulting file (out.js) doesn't contain all of my library's functions. I'm not sure I understand where the problem is.
Do I really have to include the Closure Library in the build step?
I'm working on a JavaScript library, so I don't have an entry point, and all the code must be included in the resulting file. How can I do that?
Thanks for your time!
Edit:
I've tried something different: removing all lines like goog.require('goog.vec.Mat4'); from my library. The build completes successfully, but my simulation no longer works: Cannot read property 'Mat4' of undefined.
The functionality you are looking for is documented on the GitHub wiki under Manage Closure Dependencies.
Determining Dependencies
1. Your Library Source Contains goog.provide Statements
In this case you would use --only_closure_dependencies combined with the --closure_entry_point flag. Your "entry points" are any classes in your library from which you want to calculate dependencies; you can have multiple entry points.
2. Your Library Source Does Not Contain goog.provide Statements
Use the --manage_closure_dependencies flag. This instructs the compiler to include any JS file which does not contain a goog.provide statement in the output and to calculate all needed dependencies based on the goog.require statements in those files.
Providing Files To The Compiler
Closure-compiler's --js input flags can specify minimatch glob style patterns and this is the preferred method for providing Closure-library files as input. If you are using the --manage_closure_dependencies option, you must exclude the Closure-library test files.
Example:
java -jar compiler.jar --compilation_level=SIMPLE_OPTIMIZATIONS \
  --js_output_file=out.js \
  --manage_closure_dependencies \
  --js='../src/**.js' \
  --js='../libs/closure-library/**.js' \
  --js='!../libs/closure-library/**_test.js'
I'd like to know if there is a way to include a file in a CoffeeScript file.
Something like #include in C or require in PHP...
If you use CoffeeScript with Node.js (e.g. when using the command-line tool coffee), then you can use Node's require() function exactly as you would from a JS file.
Say you want to include included-file.coffee in main.coffee:
In included-file.coffee: declare and export objects you want to export
someVar = ...
exports.someVar = someVar
In main.coffee you can then say (note the ./ prefix, which is needed for local files as opposed to node_modules packages):
someVar = require('./included-file.coffee').someVar
This gives you clean modularization and avoids namespace conflicts when including external code.
How about coffeescript-concat?
coffeescript-concat is a utility that preprocesses and concatenates
CoffeeScript source files.
It makes it easy to keep your CoffeeScript code in separate units and
still run them easily. You can keep your source logically separated
without the frustration of putting it all together to run or embed in
a web page. Additionally, coffeescript-concat will give you a single
source file that will easily compile to a single JavaScript file.
TL;DR: Browserify, possibly with a build tool like Grunt...
Solutions review
Build tool + import pre-processor
If what you want is a single JS file to be run in the browser, I recommend using a build tool like Grunt (or Gulp, or Cake, or Mimosa, or any other) to pre-process your Coffeescript, along with an include/require/import module that will concatenate included files into your compiled output, like one of these:
Browserify: probably the rising standard and my personal favourite. It lets you use Node's exports/require API in your code, then extracts and concatenates everything required into a browser-includable file. It exists for Grunt, Gulp, Mimosa, and probably most others. To this day I reckon it is probably the best solution if you're after compatibility with both Node and the browser (and even otherwise).
Some Rails Sprockets-like solutions, like grunt-sprockets-directives or gulp-include, will also work in a way consistent with CSS pre-processors (though those generally have their own importing mechanisms)
Other solutions include grunt-includes or grunt-import
Standalone import pre-processor
If you'd rather avoid the extra complexity of a build tool, you can use Browserify stand-alone, or alternatives not based on Node's require, like coffeescript-concat or Coffee-Stir
[Not recommended] Asynchronous dynamic loading (AJAX + eval)
If you're writing exclusively for the browser and don't mind (or actually want) your script being spread across several files fetched via AJAX, you can use a myriad of tools, like:
yepnope.js, or Modernizr's .load which is based on yepnope (note that yepnope is now deprecated by its maintainers, who recommend build tools and concatenation instead of remote loading)
RequireJS
HeadJS
jQuery's $.getScript
Vanilla AJAX + eval
your own implementation of AMD
You can try a library I made to solve this same problem: coffee-stir.
It's very simple: just type #include followed by the name of the file that you want to include.
#include MyBaseClass.coffee
For details
http://beastjavascript.github.io/Coffee-Stir/
I found that using gulp-concat to merge my CoffeeScript files before processing them did the trick. It can easily be installed in your project with npm.
npm install gulp-concat
Then edit your gulpfile.js:
var gulp = require('gulp'),
    coffee = require('gulp-coffee'),
    concat = require('gulp-concat');

gulp.task('coffee', function(){
    return gulp.src('src/*.coffee')
        .pipe(concat('app.coffee'))
        .pipe(coffee({bare: true}).on('error', gulp.log))
        .pipe(gulp.dest('build/'));
});
This is the code I used to concatenate all my CoffeeScript files before gulp processed them into the final build JavaScript. The only issue is that the files are processed in alphabetical order. You can explicitly state which files to process to achieve your own ordering, but you lose the flexibility of picking up new .coffee files dynamically.
gulp.src(['src/file3.coffee', 'src/file1.coffee', 'src/file2.coffee'])
    .pipe(concat('app.coffee'))
    .pipe(coffee({bare: true}).on('error', gulp.log))
    .pipe(gulp.dest('build/'));
gulp-concat as of February 25th, 2015 is available at this url.
Rails uses Sprockets to do this, and that syntax has been adapted in https://www.npmjs.org/package/grunt-sprockets-directives. It works well for me.