I have a JavaScript file which exports an array of objects in this format:
export const refData = [
  {
    type: "A",
  },
  {
    type: "D",
  },
  {
    type: "C",
  },
  ...
  {
    type: "A",
  },
];
I also have some image files in my project that are named in numerical, sequential order, such that each filename corresponds to an element in the array. The intention is that refData contains properties for the files in my project. For example, 0.jpg corresponds to the element at index 0 in the refData array, and 3.jpg corresponds to the element at index 3.
The problem is that there's no easy way for me to tell which element in the array a given file corresponds to. So if I have 28.jpg and I want to change its type in refData to something else, I have to manually count the elements to find out where to make the change.
The easiest solution would be if I could somehow see every element's index directly in my code editor. I use VS Code, so a way to annotate element indices using an extension or something else would be most helpful, though I'm also open to changing editors if this isn't possible in VS Code. I looked into inlay hints for VS Code, but they don't seem to offer this functionality, and manually counting is unsustainable as the array grows larger.
Edit: VS Code may not have this functionality, but Levi's answer below (using Object.defineProperty()) also works for my purposes.
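For reference, a minimal sketch of how the filename-to-index convention could be resolved programmatically, independent of any editor feature (the helper name and import path are hypothetical):

// lookup.js (hypothetical helper, not from the question or any answer)
import { refData } from "./refData.js";

// Given a filename like "28.jpg", return the refData entry it maps to,
// using the convention described above (0.jpg -> index 0, and so on).
export function entryForFile(filename) {
  const index = Number.parseInt(filename, 10); // "28.jpg" -> 28
  return refData[index];
}

// Example: look up (or change) the type recorded for 28.jpg
// entryForFile("28.jpg").type = "B";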
I'm trying to generate a series of actions, where the actions that should be generated depend on what the previous actions were.
Let's say my state is a set of numbers stored as an array:
[1, 2]
And there are the following actions:
{ type: "add", value: number }
{ type: "remove", value: number }
I want to generate a series of actions to dispatch before checking properties of the state. If a remove action is generated, I want to ensure that its value is in the state.
Valid Examples:
initial state: [1, 2]
[{ type: "remove", value: 1 }, { type: "remove", value: 2 }]
[{ type: "add", value: 3 }, { type: "remove", value: 3 }]
Invalid Examples:
initial state: [1, 2]
[{ type: "remove", value: 3 }]
[{ type: "remove", value: 2 }, { type: "remove", value: 2 }]
Is this something that is possible using a property based testing library, and if so how would I accomplish it?
I am using https://github.com/dubzzz/fast-check, but if this is easier using another library I'm open to examples from others.
Yes, this is perfectly suited for property-based testing. From a quick skim of the fast-check docs, I can see three approaches to this:
Make a precondition. This will discard runs in which an invalid sequence of actions was generated.
Prior to running a remove action, you'd count how often the number was contained in the initial state, how often in add actions before now, and how often in remove actions before now. Then you'd know whether you can remove it another time.
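A rough sketch of that counting precondition with fast-check (the action arbitrary, the value range, and the initial-state handling are illustrative assumptions; the actual dispatch and assertions are left out):

const fc = require("fast-check");

// Illustrative arbitrary for a single action (shape taken from the question).
const actionArb = fc.oneof(
  fc.record({ type: fc.constant("add"), value: fc.integer({ min: 0, max: 5 }) }),
  fc.record({ type: fc.constant("remove"), value: fc.integer({ min: 0, max: 5 }) })
);

fc.assert(
  fc.property(fc.array(actionArb), (actions) => {
    const initialState = [1, 2];
    // Track how often each value is currently available; fc.pre() discards
    // the run as soon as a remove would target a value that isn't there.
    const counts = new Map(initialState.map((n) => [n, 1]));
    for (const action of actions) {
      const count = counts.get(action.value) || 0;
      if (action.type === "add") {
        counts.set(action.value, count + 1);
      } else {
        fc.pre(count > 0);
        counts.set(action.value, count - 1);
      }
    }
    // ...dispatch the actions and check properties of the resulting state
  })
);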
Use model-based testing. This seems to fit your use case perfectly. Each action would be represented by one Command, all commands would simply apply their respective action, and in their check method you would validate whether the action is eligible.
This requires building a model, where you need to make sure that the model is a simplification of the actual state and that it uses a different implementation approach (so that you won't re-implement your bugs here). In your example, that could mean keeping a Set of occurring numbers or a Map of their counts, not an ordered array.
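A minimal sketch of what those commands could look like with fast-check's fc.commands and fc.modelRun, where the model is a Map of counts and the "real" side is a plain array standing in for your actual store (all names here are illustrative):

const fc = require("fast-check");

// Model: Map of value -> count, a deliberate simplification of the real state.
class AddCommand {
  constructor(value) { this.value = value; }
  check() { return true; }                                   // adding is always allowed
  run(model, real) {
    model.set(this.value, (model.get(this.value) || 0) + 1);
    real.push(this.value);                                   // stand-in for dispatching "add"
  }
  toString() { return `add(${this.value})`; }
}

class RemoveCommand {
  constructor(value) { this.value = value; }
  check(model) { return (model.get(this.value) || 0) > 0; }  // eligible only if present
  run(model, real) {
    model.set(this.value, model.get(this.value) - 1);
    real.splice(real.indexOf(this.value), 1);                // stand-in for dispatching "remove"
  }
  toString() { return `remove(${this.value})`; }
}

fc.assert(
  fc.property(
    fc.commands([
      fc.integer({ min: 0, max: 5 }).map((v) => new AddCommand(v)),
      fc.integer({ min: 0, max: 5 }).map((v) => new RemoveCommand(v)),
    ]),
    (cmds) => {
      const setup = () => ({ model: new Map([[1, 1], [2, 1]]), real: [1, 2] });
      fc.modelRun(setup, cmds); // commands whose check() fails are skipped
    }
  )
);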
Generate only valid sequences in the first place.
This is much more efficient than the first two approaches, but usually also much more complicated. It might be necessary, though, if the generated numbers to be removed are too unbiased and rarely ever match one that is actually in the list. I have two ideas here:
generate your list of actions recursively, and keep a model similar to the one from model-based testing. You have to update it yourself, however. With this, you can generate remove actions only for those numbers that are currently in your model.
I'm not sure whether letrec or memo help here, whether you might need to use chain, or whether you should ask the library author to provide an extra function for this use case. (Maybe even as part of model-based testing, where Command instances could be dynamically derived from the current model?)
generate a remove action always together with a preceding add action for the same number. After having generated a list of [add(x)] and [add(y), remove(y)] sublists, merge these in arbitrary order while keeping the relative order between the elements of each sublist.
This is probably the most elegant method to do this, as it looks nothing like the model of your state. However, I'm pretty certain that you will need to build your own Arbitrary for the randomMerge function - maybe ask the library author for help with this or request a new feature.
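As a simplified illustration of that idea (this version only concatenates the sublists instead of truly interleaving them, so it is weaker than the randomMerge described above, but every generated sequence is already valid):

const fc = require("fast-check");

// Each sublist is either [add(x)] or [add(y), remove(y)], so a remove can
// never occur without its matching add earlier in the sequence.
const sublistArb = fc.oneof(
  fc.integer({ min: 0, max: 5 }).map((x) => [{ type: "add", value: x }]),
  fc.integer({ min: 0, max: 5 }).map((y) => [
    { type: "add", value: y },
    { type: "remove", value: y },
  ])
);

const actionsArb = fc.array(sublistArb).map((lists) => lists.flat());

fc.assert(
  fc.property(actionsArb, (actions) => {
    // ...dispatch the actions and check properties of the resulting state
  })
);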
I want to make a system that gets a user's inventory and then displays each item as its image and name. I only know how to do the JSON part, and I'm unsure what to do next.
All I have at the moment is:
http://steamcommunity.com/profiles/<PROFILEID>/inventory/json/753/1
Is anyone able to help me turn that data into what I am looking for?
First off - for CS:GO, at least - the URL you are looking for is:
http://steamcommunity.com/profiles/<PROFILEID>/inventory/json/730/2
The two numbers at the end of the URL refer to the app ID and context ID, respectively. CS:GO's app ID is 730 and most games use a context ID of 2 for user inventories.
The JSON returned from this request is an object in the following format:
{
  "success": true,
  "rgInventory": { ... },
  "rgCurrency": { ... },
  "rgDescriptions": { ... },
  "more": false,
  "more_start": false
}
For the use-case you described (getting the item names and icons), you can ignore everything except the rgDescriptions object. This object contains an object for each item in the user's inventory. The object keys are the result of concatenating the item's classid and instanceid, but that doesn't really matter for you - you can just iterate over it like you would for any other object.
The two data points that you're interested in are market_hash_name, which is the name of the item, and icon_url, which is part of what you need to display the actual image. The full path to the image is https://steamcommunity-a.akamaihd.net/economy/image/{icon_url}. For example, this link loads the icon for a G3SG1 | Polar Camo in my inventory.
One thing to note is that the market_hash_name includes the wear pattern (e.g., Minimal Wear, Factory New, etc.). If you don't need those, you can just use the name from the object.
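A minimal sketch of how that could be wired up in JavaScript (Node 18+ with the built-in fetch, inside an ES module or async function; <PROFILEID> stays a placeholder):

const profileId = "<PROFILEID>"; // placeholder, as in the question
const url = `http://steamcommunity.com/profiles/${profileId}/inventory/json/730/2`;

const data = await (await fetch(url)).json();

// Each entry in rgDescriptions carries the display data we need.
const items = Object.values(data.rgDescriptions).map((desc) => ({
  name: desc.market_hash_name,
  image: `https://steamcommunity-a.akamaihd.net/economy/image/${desc.icon_url}`,
}));

console.log(items); // [{ name: "...", image: "https://..." }, ...]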
Should I store objects in an Array or inside an Object with top importance given Write Speed?
I'm trying to decide whether data should be stored as an array of objects, or using nested objects inside a mongodb document.
In this particular case, I'm keeping track of a set of continually updating files that I add and update and the file name acts as a key and the number of lines processed within the file.
the document looks something like this
{
  t_id: 1220,
  some-other-info: {}, // there's other info here not updated frequently
  files: {
    log1-txt: { filename: "log1.txt", numlines: 233, filesize: 19928 },
    log2-txt: { filename: "log2.txt", numlines: 2, filesize: 843 }
  }
}
or this
{
  t_id: 1220,
  some-other-info: {},
  files: [
    { filename: "log1.txt", numlines: 233, filesize: 19928 },
    { filename: "log2.txt", numlines: 2, filesize: 843 }
  ]
}
I am making the assumption that when handling a document, especially when it comes to updates, it is easier to deal with objects, because the location of an entry can be determined by its name; unlike an array, where I have to look through each element's value until I find a match.
Because the object keys will contain periods, I will need to convert (or drop) the periods to create valid keys (fi.le.log to filelog or fi-le-log). I'm not worried about possible duplicate file names emerging (such as fi.le.log and fi-le.log), so I would prefer to use objects, because the number of files is relatively small but the updates are frequent.
Or would it be better to handle this data in a separate collection for best write performance...
{
  "_id": ObjectId('56d9f1202d777d9806000003'),
  "t_id": "1220",
  "filename": "log1.txt",
  "filesize": 1843,
  "numlines": 554
},
{
  "_id": ObjectId('56d9f1392d777d9806000004'),
  "t_id": "1220",
  "filename": "log2.txt",
  "filesize": 5231,
  "numlines": 3027
}
From what I understand, you are talking about write speed without any read consideration, so we have to think about how you will insert/update your document.
We have to compare the two update forms (assuming you know the _id of the document you are updating; replace {key} with the key name, in your example log1-txt or log2-txt):
db.Col.update({ _id: '' }, { $set: { 'files.{key}': object }})
vs
db.Col.update({ _id: '', 'files.filename': '{key}'}, { $set: { 'files.$': object }})
The second one means that MongoDB has to scan the array, find the matching index, and update it. The first one means MongoDB just updates the specified field.
The worst part: the second command will not work if the matching filename is not present in the array! So you have to execute it, check whether nMatched is 0, and create the entry if so. That is really bad for write speed (see MongoDB: upsert sub-document).
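A hedged sketch of that two-step fallback in the mongo shell (the collection name, document _id, and field values are placeholders):

var entry = { filename: "log1.txt", numlines: 240, filesize: 20110 };

// Try to update the matching array element in place.
var res = db.Col.update(
  { _id: "<docId>", "files.filename": "log1.txt" },
  { $set: { "files.$": entry } }
);

// If nothing matched, the file wasn't in the array yet, so push it instead.
if (res.nMatched === 0) {
  db.Col.update({ _id: "<docId>" }, { $push: { files: entry } });
}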
If you will never (or almost never) run read queries or the aggregation framework on this collection, go for the first one; it will be faster. If you want to aggregate, unwind, or do some analytics on the parsed files to get statistics about file sizes and line counts, you may consider using the second one; you will avoid some headaches.
Pure write speed will be better with the first solution.
I'm working on implementing an image album and would like an elegant solution for storing the order of its images. The Album model holds an array of references to Images as follows:
// Example album
{
  _id: 12345,
  images: [ObjectId(1), ObjectId(2), ObjectId(3)]
}
// Example image
{
  _id: 1,
  title: "My Picture",
  author: "Me"
}
I'm trying to figure out how to handle an order update in the database. For example, let's say I'd like to update the array of images from:
[ObjectId(1), ObjectId(2), ObjectId(3)]
to
[ObjectId(3), ObjectId(1), ObjectId(2)]
I'm wondering whether 1) I can directly reorganize an array of references via an update, or 2) if I can, how I can go about actually accessing each element in the array to dictate the new order.
Actually, this solution might be quite useless because the array could be populated asynchronously. An alternative solution would be to ignore what I'm asking for entirely, and store the order of each image individually as follows:
// Image
{
  _id: 1,
  title: "My Picture",
  author: "Me",
  position: 2 // like this
}
This way when I pull down an array of images asynchronously, all I have to do is sort by each image's position field.
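For example, a fetch along these lines could work, assuming an images collection and that the album document still lists which images belong to it (both names are assumptions for illustration):

// Sort the album's images ascending by their position field.
db.images.find({ _id: { $in: album.images } }).sort({ position: 1 });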
A downside to this strategy is that as the number of images gets sufficiently large, reordering will become quite costly. Moving the final element of a 100-length array to the very beginning would require that I update every single image's position value in the database.
Please let me know if you have any suggestions!
The order of elements in an array remains the same as when it was persisted.
To change the order, update the array with the order you like. E.g.:
db.album.update({_id:1}, {$set:{images:[ObjectId(3), ObjectId(1), ObjectId(2)]}});
I am running a script on a large dataset to expand existing information, e.g.:
...
{
  id: 1234567890
},
{
  id: 1234567891
},
...
becomes
...
{
  id: 1234567890,
  Name: "Joe"
},
{
  id: 1234567891,
  Name: "Bob"
},
...
I am doing this via the following code:
for (var cur in members)
{
  var curMember = members[cur];
  // fetch account based on curMember.id to 'curAccount'
  if (curAccount != null)
  {
    curMember.DisplayName = curAccount.DisplayName;
  }
}
For the most part, this works as expected. However, once in a while (in the order of tens of thousands of entries), the result looks like this:
...
{
  id: 1234567891,
  Name: "Bob",
  Name: "Bob"
},
...
I now have data which is in an invalid format and cannot be read by the DB, since duplicate property names don't make sense. It occurs for random entries when the script is re-run, not the same ones every time. I need either a way to PREVENT this from happening, or a way to DETECT that it has happened so I can simply reprocess the entry. Does anyone know what's going on here?
EDIT: After further investigation, the problem appears to occur only when the objects being modified come from a MongoDB query. It seems that if code explicitly sets a value to the same element name more than once, the field will be duplicated. All elements of the same name appear to be set to the most recently specified value. If it is only assigned once as in my original problem, it is only duplicated very rarely. I am using MongoDB 2.4.1.
Got it all figured out. MongoDB has a bug up to shell version 2.4.1 which allows duplicate element names to be set for query result objects. Version 2.4.3, released just this Monday, has a fix. See https://jira.mongodb.org/browse/SERVER-9066.
I don't really get your problem. If you assign identical property names to an object in ECMAScript, the property simply gets overwritten. The construct in your snippet can never exist in that form on a live object (excluding JSON strings).
If you just want to detect the attempt to create a property which is already there, you either need to have that object reference cached beforehand (so you can loop over its keys), or you need to apply ES5 strict mode by placing
"use strict";
at the top of your file or function. That will ensure that the interpreter throws an exception on the attempt to create two identical property keys. You can, of course, use a try/catch statement to intercept that failure.
It seems you cannot intercept errors that get thrown because of a strict-mode violation.