I am currently trying to learn how to use Angular and I was tasked with rewriting an existing functional page to use Angular.
One of our core functionalities is translating strings with a JavaScript function; however, this translate method is only supposed to be called after the translation strings are loaded into the script, and that loading is done asynchronously.
We currently use callbacks to achieve this behavior, but due to Angular's nature (at least in my understanding) my data is displayed as soon as the page is rendered.
I have the strings to be translated, but I want to display the translated strings in my table, so I tried creating a "translate" filter that would call my existing translate method and return its value. What I believe is happening is that at the moment the filter gets called, I still don't have the translation strings, as they may not be loaded yet.
What I expected was for something like {{ object.name | translate }}, where object.name is a string to be translated such as "access.granted", to give me the corresponding translation for a given language, for instance "You were given permission to access this area.".
I am not positive this is the best way to achieve what I want, as I haven't had much time to delve deep into Angular yet, but one requirement I have is that I cannot change the way translations are dealt with: we have a huge system already using that code, and the new pages using Angular must be fully compatible.
What I have tried: (my current filter code)
function ngTranslate (text, backoff) {
    if (backoff == undefined)
        backoff = 100;
    if (backoff > 5000)
        backoff = 5000;
    if (internationalization != undefined && Object.keys(internationalization.messages).length > 0)
        return translate(text);
    setTimeout(function () {
        return ngTranslate(text, backoff * 2);
    }, backoff);
}
app.filter("translate", function ($log) {
    return function (key) {
        $log.info(key);
        return ngTranslate(key);
    };
});
What I tried to achieve here was to verify whether the translation strings are already loaded and, if not, to try again after a short delay, but even after the translation strings loaded, nothing showed up in the table.
Here is my table code:
<tr data-ng-repeat="product in productList">
    <td>{{ product.name | translate }}</td>
</tr>
Thanks a lot in advance
Well, one of the ways I can think of is to bootstrap your module, load the strings, and then manually start the Angular app in the callback function.
Use this function to manually start up the Angular application:
https://docs.angularjs.org/api/ng/function/angular.bootstrap
That way you can ensure all the translation strings will be loaded before you initialize your app.
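As a minimal sketch of that approach (loadTranslations is a hypothetical stand-in for your existing callback-based loader, and "app" is an assumed module name): remove ng-app from the page and only call angular.bootstrap once the strings are available.
// No ng-app attribute in the HTML; the application is started manually below.
var app = angular.module("app", []);

app.filter("translate", function () {
    return function (key) {
        // Safe to call directly: the strings are guaranteed to be loaded
        // before the application is bootstrapped.
        return translate(key);
    };
});

// loadTranslations is a placeholder for the existing asynchronous loader.
loadTranslations(function () {
    angular.bootstrap(document, ["app"]);
});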
What is the name of the native function that handles template literals?
That is, I know that when you write tag`Foo ${'bar'}.`;, that’s just syntactic sugar for tag(['Foo ', '.'], 'bar');.¹
But what about just `Foo ${'bar'}.`;? I can’t just “call” (['Foo ', '.'], 'bar');. If I already have arguments in that form, what function should I pass them to?
I am only interested in the native function that implements the template literal functionality. I am quite capable of rolling my own, but the purpose of this question is to avoid that and do it “properly”—even if my implementation is a perfect match of current native functionality, the native functionality can change and I want my usage to still match. So answers to this question should take on one of the following forms:
The name of the native function to use, ideally with links to and/or quotes from documentation of it.
Links to and/or quotes from the spec that defines precisely what the implementation of this function is, so that if I roll my own at least I can be sure it’s up to the (current) specifications.
A backed-up statement that the native implementation is unavailable and unspecified. Ideally this is backed up by, again, links to and/or quotes from documentation, but if that’s unavailable, I’ll accept other sources or argumentation that backs this claim up.
¹ Actually, the first argument needs a raw property, since it’s a TemplateStringsArray rather than a regular array, but I’m skipping that here for the sake of making the example more readable.
Motivation
I am trying to create a tag function (tag, say) that, internally, performs the default template literal concatenation on the input. That is, I am taking the TemplateStringsArray and the remaining arguments, and turning them into a single string that has already had its templating sorted out. (This is for passing the result into another tag function, otherTag perhaps, where I want the second function to treat everything as a single string literal rather than a broken up template.)
For example, tag`Something ${'cooked'}.`; would be equivalent to otherTag`Something cooked.`;.
My current approach
The definition of tag would look something like this:
function tag(textParts, ...expressions) {
    const cooked = // an array with a single string value
    const raw = // an array with a single string value
    return otherTag({ ...cooked, raw });
}
Defining the value of raw is fairly straightforward: I know that String.raw is the tag function I need to call here, so const raw = [String.raw(textParts.raw, ...expressions)];.
But I cannot find anywhere on the internet what function I would call for the cooked part of it. What I want is, if I have tag`Something ${'cooked'}.`;, I want const cooked = `Something ${'cooked'}.`; in my function. But I can’t find the name of whatever function accomplishes that.
The closest I’ve found was a claim that it could be implemented as
const cooked = [expressions.map((exp, i) => textParts[i] + exp).join('')];
This is wrong—textParts may be longer than expressions, since tag`Something ${'cooked'}.`; gets ['Something ', '.'] and ['cooked'] as its arguments.
Improving this expression to handle that isn’t a problem:
const cooked = [
    textParts
        .map((text, i) => (i > 0 ? expressions[i-1] : '') + text)
        .join(''),
];
But that’s not the point—I don’t want to roll my own here and risk it being inconsistent with the native implementation, particularly if that changes.
The name of the native function to use, ideally with links to and/or quotes from documentation of it.
There isn't one. It is syntax, not a function.
Links to and/or quotes from the spec that defines precisely what the implementation of this function is, so that if I roll my own at least I can be sure it’s up to the (current) specifications.
Section 13.2.8 Template Literals of the specification explains how to process the syntax.
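If you do roll your own, one way to keep the concatenation consistent with that part of the spec without re-implementing it is to reuse String.raw, feeding it the cooked strings as if they were the raw ones (a sketch, reusing the hypothetical tag and otherTag from the question):
function tag(textParts, ...expressions) {
    // String.raw only reads the `raw` property of its first argument and
    // interleaves those strings with the substitutions, so passing the
    // cooked strings as `raw` produces the cooked concatenation.
    const cooked = [String.raw({ raw: textParts }, ...expressions)];
    const raw = [String.raw(textParts.raw, ...expressions)];
    return otherTag(Object.assign(cooked, { raw }));
}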
I am trying to process an insert event from CKEditor 5.
editor.document.on("change", (eventInfo, type, data) => {
    switch (type) {
        case "insert":
            console.log(type, data);
            break;
    }
});
When typing in the editor, the callback is called. The data argument in the event callback looks approximately like this:
{
    range: {
        start: {
            root: { ... },
            path: [0, 14]
        },
        end: {
            root: { ... },
            path: [0, 15]
        }
    }
}
I don't see a convenient way to figure out what text was actually inserted. I can call data.range.root.getNodeByPath(data.range.start.path); which seems to get me the text node that the text was inserted into. Should we then look at the text node's data field? Should we assume that the last item in the path is always an offset for the start and end of the range and use that to substring? I think the insert event is also fired for inserting non-text things (e.g. an element). How would we know that this is indeed a text-type event?
Is there something I am missing, or is there just a different way to do this altogether?
First, let me describe how you would do it currently (Jan 2018). Please keep in mind that CKEditor 5 is now undergoing a big refactoring and things will change. At the end, I will describe how it will look after we finish this refactoring. You may skip to the later part if you don't mind waiting some more time for the refactoring to come to an end.
EDIT: 1.0.0-beta.1 was released on the 15th of March, so you can jump to the "Since March 2018" section.
Until March 2018 (up to 1.0.0-alpha.2)
(If you need to learn more about some class API or an event, please check out the docs.)
Your best bet would be simply to iterate through the inserted range.
let insertedText = '';

for ( const child of data.range.getItems() ) {
    if ( child.is( 'textProxy' ) ) {
        insertedText += child.data;
    }
}
Note that a TextProxy instance is always returned when you iterate through the range, even if the whole Text node is included in the range.
(You can read more about stringifying a range in CKEditor5 & Angular2 - Getting exact position of caret on click inside editor to grab data.)
Keep in mind that InsertOperation may insert multiple nodes of different kinds. Mostly, these are just single characters or elements, but more nodes can be provided. That's why there is no additional data.item or similar property in data. There could be a data.items property, but it would just be the same as Array.from( data.range.getItems() ).
Doing changes on Document#change
You haven't mentioned what you want to do with this information afterwards. Getting the range's content is easy, but if you'd like to somehow react to these changes and change the model, then you need to be careful. When the change event is fired, there might already be more changes enqueued. For example:
more changes can come at once from the collaboration service,
a different feature might have already reacted to the same change and enqueued its own changes, which might make the model different.
If you know exactly what set of features you will use, you may just stick with what I proposed. Just remember that any change you make to the model should be done in a Document#enqueueChanges() block (otherwise, it won't be rendered).
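For illustration, a minimal sketch of such a block (assuming Document#enqueueChanges() simply takes a callback, as described above):
editor.document.enqueueChanges( () => {
    // Any model modifications go here so that they are rendered properly.
} );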
If you would like to make this solution bulletproof, you would probably have to do this:
While iterating over the data.range children, if you find a TextProxy, create a LiveRange spanning that node.
Then, in an enqueueChanges() block, iterate through the stored LiveRanges and through their children.
Do your logic for each found TextProxy instance.
Remember to destroy() all the LiveRanges afterwards.
As you can see, this seems unnecessarily complicated. There are some drawbacks to providing an open and flexible framework like CKE5, and having to keep all the edge cases in mind is one of them. However, it is true that it could be simpler, and that's why we started the refactoring in the first place.
Since March 2018 (starting from 1.0.0-beta.1)
The big change coming in 1.0.0-beta.1 will be the introduction of the model.Differ class, a revamped event structure and a new API for a big part of the model.
First of all, Document#event:change will be fired after all enqueueChange blocks have finished. This means that you won't have to worry about another change messing with the change that you are reacting to in your callback.
Also, the engine.Document#registerPostFixer() method will be added and you will be able to use it to register callbacks. The change event will still be available, but there will be slight differences between the change event and registerPostFixer (we will cover them in a guide and the docs).
Second, you will have access to a model.Differ instance, which will store a diff between the model state before the first change and the model state at the moment when you want to react to the changes. You will iterate through all diff items and check what exactly has changed and where.
Other than that, a lot of other changes will be made in the refactoring, and the code snippet below reflects them too. So, in the new world, it will look like this:
editor.document.registerPostFixer( writer => {
    const changes = editor.document.differ.getChanges();

    for ( const entry of changes ) {
        if ( entry.type == 'insert' && entry.name == '$text' ) {
            // Use `writer` to do your logic here.
            // `entry` also contains `length` and `position` properties.
        }
    }
} );
In terms of code, there might be a bit more of it than in the first snippet, but:
The first snippet was incomplete.
There are a lot fewer edge cases to think about in the new approach.
The new approach is easier to grasp - you have all the changes available after they are all done, instead of reacting to a change while other changes are queued and may mess with the model.
The writer is an object that will be used to make changes to the model (instead of the Document#batch API). It will have methods like insertText(), insertElement(), remove(), etc.
You can already check the model.Differ API and tests, as they are available on the master branch. (The internal code will change, but the API will stay as it is.)
Szymon Cofalik's answer went in the direction of "how to apply some changes based on a change listener". That made it far more complex than what's needed to simply get the text from the Document#change event, which boils down to the following snippet:
let insertedText = '';

for ( const child of data.range.getItems() ) {
    if ( child.is( 'textProxy' ) ) {
        insertedText += child.data;
    }
}
However, reacting to a change is a tricky task, so make sure to read Szymon's insightful answer if you plan to do so.
I have a set of JavaScript functions that handle certain objects. All these objects have the following flexibility:
Fields can be accessed like this: data[prop][sub-prop][etc.], OR
Like this (with a type sub-structure): data[TYPE][prop][sub-prop][etc.].
The object is accessed in many places, and the condition (let's call it is_mixed) is relevant everywhere.
I thought of the following alternatives:
Always access data like this: (is_mixed ? data[TYPE] : data)[prop][sub-prop][etc.]
Have a function called getData and always access data like this: getData()[prop][sub-prop][etc.].
The function code would be:
function getData() { return is_mixed ? data[TYPE] : data; }
Run the following on every new input: if (is_mixed) { data = data[TYPE]; }
It seems to me that options 2 and 3 might be copying the object data (which might be big) and performance is important here (I didn't find the literature to support this guess), but option 1 will make the code big and ugly.
Is there a better option? What's the best way to achieve this in terms of performance, code quality and, basically, best practices?
It seems to me that options 2 and 3 might be copying the JSON content
No, they won't. They both just copy an object reference, which is quick and cheap (like copying a boolean). #2 is of course slightly slower, since it's a function call, but if it's used a lot, any decent JavaScript engine will inline the function anyway, giving you the benefit of modularity at the source level. (It can take thousands of calls to the function in a shortish period of time to make that kick in, though; e.g., a modern engine only bothers with optimization when it looks likely to matter.)
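To illustrate (a small sketch with made-up field names), assigning the nested object to another variable copies only the reference, so both names point at the same underlying object:
const data = { user: { name: { first: "Ada" } } }; // "user" stands in for TYPE here
const view = data.user;            // copies a reference, not the object itself
view.name.first = "Grace";
console.log(data.user.name.first); // "Grace" - both variables see the same object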
I am trying to use ngGrid to make somewhat of a "tree control" which I can build dynamically by calling APIs. ngGrid allows for grouping on rows, yet the nature of it requires that all rows be present at the beginning. This is unfortunate, because an API that pulled back all generation data for a File Integrity Monitoring system would be insanely slow and stupid. Instead, I wish to build the "tree" dynamically on the expansion of each generation.
I am trying to inject children (ngRows) into a group row (ngAggregate) in a callback, yet I do not think that I am calling the correct constructor for the ngRows, because the rows are ignored by the control.
Through the use of the aggregateTemplate option on the gridOptions for ngGrid, I have been able to intercept the expansion of a group quite easily.
(maybe not easily, but still)
I've replaced the ng-click of the default template:
ng-click="row.toggleExpand()"
with:
ng-click="$parent.$parent.rowExpanded(row)"
I know that it's a bit of a hack, but we can get to that later. For now, it gets the job done.
The way that I discovered how to work my way up the $scope to my rowExpanded function was by setting a breakpoint in ngGrid's "row.toggleExpand" function and calling it from the template as so:
ng-click="row.toggleExpand(this)"
Once I retrieve the group I want, I call an API to get the children for said group. I then need to add the returned data as children of the row. I decided to do this by calling ngGrid's ngRow factory:
row.children = [];
for(var i = 0; i < childData.length; i++)
{
    row.children[row.children.length] = row.rowFactory.buildEntityRow(childData[i], i);
}
row.toggleExpand();
... yet this does not appear to be working. The rows are not showing up after I do the expand! Why won't my rows show up?
Here's my current Plunker!
By the way
I've placed a debugger statement within the group-expand callback. As long as you have your debugger open, you should catch a breakpoint on the expansion of a group.
Thanks everybody!
I found my answer, I'm an idiot....
I got this control working, and then realized that it was a total hack: I could have used the control the way it was meant to be used, and it would have worked much better, had a much better workflow, and saved me an entire day of development. If you are wondering how you use the control this way, the answer is that you don't.
I got the stupid thing to work by updating my data structure after the round trip and forcing the grid to refresh - pretty obvious. I had to set the grid options so that groups were always expanded, and I had to control the collapser icon logic myself, outside of ngGrid. I never called row.toggleExpand. I also hid any rows with null values via a function call within an ng-if on my rowTemplate. After all that was said and done, I put my foot in my mouth.
I have some JavaScript functions that run on both the client (browser) and the server (within a Java Rhino context). These are small functions - basically little validators that are well defined and don't rely upon globals or closures - self-contained and portable.
Here's an example:
function validPhoneFormat(fullObject, value, params, property) {
    var phonePattern = /^\+?([0-9\- \(\)])*$/;
    if (value && value.length && !phonePattern.test(value))
        return [ {"policyRequirement": "VALID_PHONE_FORMAT"}];
    else
        return [];
}
To keep things DRY, my server code gets a handle on each of these functions and calls toString() on them, returning them to the browser as part of a JSON object. Something like this:
{ "name" : "phoneNumber",
"policies" : [
{ "policyFunction" : "\nfunction validPhoneFormat(fullObject, value, params, property) {\n var phonePattern = /^\\+?([0-9\\- \\(\\)])*$/;\n if (value && value.length && !phonePattern.test(value)) {\n return [{\"policyRequirement\":\"VALID_PHONE_FORMAT\"}];\n } else {\n return [];\n }\n}\n"
}
]
}
My browser JS code then takes this response and creates an instance of this function in that context, like so:
eval("var policyFunction = " + this.policies[j].policyFunction);
policyFailures = policyFunction.call(this, form2js(this.input.closest("form")[0]), this.input.val(), params, this.property.name));
This all works very well. However, I then run this code through JSLint, and I get back this message:
[ERROR] ValidatorsManager.js:142:37:eval is evil.
I appreciate that often, eval can be dangerous. However, I have no idea how else I could implement such a mechanism without using it. Is there any way I can do this and also pass through the JSLint validator?
I wouldn't worry about it since you are only passing these function strings from the server to the client, and are thus in control of what will be evaluated.
On the other hand, if you were going the other direction and doing the evals of client-passed code on the server, that would be an entirely different story...
Update:
As disabling the validation option in your comment may cause you to miss future errors, I would instead suggest passing the function name rather than the entire function and have the function library mirrored on the server and client. Thus, to call the function, you'd use the following code:
var policyFunction = YourLibraryName[this.policies[j].policyFunctionName];
var policyArguments = this.policies[j].policyArguments;
policyFunction.apply(this, policyArguments);
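For example, the shared library could look something like this on both the server and the client (YourLibraryName is just the placeholder used above, and the validator is the one from the question):
// Mirrored on the server (Rhino) and the client (browser).
var YourLibraryName = {
    validPhoneFormat: function (fullObject, value, params, property) {
        var phonePattern = /^\+?([0-9\- \(\)])*$/;
        if (value && value.length && !phonePattern.test(value))
            return [{ "policyRequirement": "VALID_PHONE_FORMAT" }];
        return [];
    }
};
The JSON response would then carry only "policyFunctionName": "validPhoneFormat" plus any arguments, instead of the serialized function body.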
Update 2:
I was able to validate the following code with JSLint successfully, which essentially allows you to "turn off" validation for the vast minority of cases where eval is appropriate. At the same time, JSLint still validates normal eval calls, and all uses of this method should throw up flags for future developers to avoid using it/refactor it out where possible/as time allows.
var EVAL_IS_BAD__AVOID_THIS = eval;
EVAL_IS_BAD__AVOID_THIS(<yourString>);
Don't encode a function as a string in JSON. JSON is for content, which you are confounding with behavior.
Instead, I suppose you could return JS files instead, which allow real functions:
{ name : "phoneNumber",
policies : [
{ policyFunction : function() {
whateverYouNeed('here');
}
}
]
}
But while that solves the technical issue, it's still not a great idea.
The real solution here is to move your logic out of your content entirely. Import a JS file full of little validation functions and call them as needed based on a dataType property in your JSON, or something similar. If these functions are as small and portable as you say, this should be trivial to accomplish.
Getting your data all tangled up with your code usually leads to pain. You should statically include your JS, then dynamically request/import/query for your JSON data to run through your statically included code.
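A minimal sketch of that idea (the dataType value, the validators object and validateField are made-up names for illustration):
// validators.js - statically included on both the server and the client
var validators = {
    phoneNumber: function (value) {
        return /^\+?([0-9\- \(\)])*$/.test(value || "");
    }
};

// The JSON now carries only data, e.g. { "name": "phoneNumber", "dataType": "phoneNumber" }
function validateField(fieldDef, value) {
    var validator = validators[fieldDef.dataType];
    return validator ? validator(value) : true;
}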
I would avoid using eval in all situations. There's no reason you can't code around it. Instead of sending code to the client, just keep it hosted on the server in one contained script file.
If that's not doable, you can also have a dynamically generated javascript file then pass in the necessary parameters via the response, and then dynamically load the script on the client side. There's really no reason to use eval.
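If you go the dynamically generated file route, loading it on the client can be as simple as appending a script tag (a sketch; the URL and the callback body are placeholders):
var script = document.createElement("script");
script.src = "/validators.js?form=phoneNumber"; // placeholder URL for the generated file
script.onload = function () {
    // The generated file has defined its validation functions; call them here.
};
document.head.appendChild(script);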
Hope that helps.
You can use
setInterval("code to be evaluated", 0);
Internally, if you pass setInterval a string it performs a function similar to eval().
However, I wouldn't worry about it. If you KNOW eval() is evil, and take appropriate precautions, it's not really a problem. Eval is similar to GoTo; you just have to be careful and aware of what you're doing to use them properly.
With very little parsing you could have had it like so:
var source = this.policies[j].policyFunction;
var body = source.substr(source.indexOf("(") + 1);
var arglist = body.substr(0, body.indexOf(")"));
body = body.substr(arglist.length + 1);
var policyFunction = new Function(arglist, body);
Which would provide a bit of validation, avoid the literal use of eval and work synchronously with the code. But it is surely eval in disguise, and it is prone to XSS attacks. If a malevolent person can get their code loaded and evaluated this way, it will not save you. So, really, just don't do it. Add a <script> tag with the proper URL and that would certainly be safer. Well, you know, better safe than sorry.
PS. My apologies if the code above doesn't work; it only shows the intent. I've not tested it, and if I made a mistake counting parentheses or some such - well, you should get the idea, I'm not advertising it by any means.
DRY is definitely something I agree with; however, there is a point where copy+pasting is more efficient and easier to maintain than referencing the same piece of code.
The code you're saving yourself from writing seems to be equivalent to a clean interface and simple boilerplate. If the same code is being used on both the server and the client, you could simply pass around the common pieces of the function, rather than the whole function.
Payload:
{
    "name": "phoneNumber",
    "type": "regexCheck",
    "checkData": "/^\\+?([0-9\\- \\(\\)])*$/"
}
if (payload.type === "regexCheck") {
    // checkData arrives as a string, so rebuild the RegExp from it first
    const pattern = new RegExp(payload.checkData.slice(1, -1));
    const result = validPhoneFormat(fullObject, value, pattern);
}
function validPhoneFormat(fullObject, value, regexPattern) {
    if (value && value.length && !regexPattern.test(value))
        return [ {"policyRequirement": "VALID_PHONE_FORMAT"}];
    else
        return [];
}
This would give you the ability to update the regex from a single location. If the interface changes, it does need to be updated in two places, but I wouldn't consider that a bad thing. If the client is running code, why hide the structure?
If you really, really want to keep both the object structure and the patterns in one place, extract it to a single API. Have a "ValidatePhoneViaRegex" API endpoint which is called by all places you'd be passing this serialized function to.
If all of this seems like too much effort, set JSLint to ignore your piece of code:
"In JSHint 1.0.0 and above you have the ability to ignore any warning with a special option syntax. The identifier of this warning is W061. This means you can tell JSHint to not issue this warning with the /*jshint -W061 */ directive.
In ESLint the rule that generates this warning is named no-eval. You can disable it by setting it to 0, or enable it by setting it to 1."
https://github.com/jamesallardice/jslint-error-explanations/blob/master/message-articles/eval.md
I would prefer to see copy+pasted code, a common API, or received parameters with copy+pasted boilerplate, rather than magical functions passed in from the server to be executed.
What happens if you get a cross-browser compatibility error with one of these shared functions?
Well, the first thing to bear in mind is that JSLint does make the point that "it will hurt your feelings". It's designed to point out where you're not following best practices - but code that isn't perfect can still work just fine; there's no compulsion upon you to follow JSLint's advice.
Having said that, eval is evil, and in virtually all cases there is always a way around using it.
In this case, you could use a library such as require.js, yepnope.js or some other library that is designed to load scripts separately. This would allow you to include the JavaScript functions you need dynamically, but without having to eval() them.
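For instance, with require.js the validators could live in their own module and be pulled in on demand (a sketch; the module name and file layout are made up, and the validator is the one from the question):
// validators.js
define(function () {
    return {
        validPhoneFormat: function (fullObject, value, params, property) {
            var phonePattern = /^\+?([0-9\- \(\)])*$/;
            return value && value.length && !phonePattern.test(value)
                ? [{ "policyRequirement": "VALID_PHONE_FORMAT" }]
                : [];
        }
    };
});

// Somewhere in the page code
require(["validators"], function (validators) {
    var failures = validators.validPhoneFormat({}, "+1 (555) 123-4567", {}, "phoneNumber");
    console.log(failures); // [] - the number matches the pattern
});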
There are probably several other solutions as well, but that was the first one that came to my mind.
Hope that helps.