I'm trying to extract an embedded resource from a .exe file. I can easily get the embedded resource out from C# code like so:
string changeLog;
/* Get the changelog embedded resource */
using (var stream = Assembly.GetExecutingAssembly().GetManifestResourceStream("SIPBandVoIPClient.CHANGELOG.md"))
using (var reader = new StreamReader(stream))
{
    changeLog = reader.ReadToEnd();
}
However, now I'm trying to get the embedded resource out from JavaScript/Node.js.
I tried just 'grepping' for some of the text that I knew was in the file, but I couldn't find it. Perhaps it's base64 encoded?
Edit: I tried embedding the resource in a different way - by using the properties section in Visual Studio, instead of adding the file to the project, and marking it as a resource.
Now, I can access it like so:
string changeLog = Encoding.UTF8.GetString(Resources.CHANGELOG);
Furthermore, I can find the text verbatim in the .exe. Before the text, there is some (binary?) data like this:
(Apologies for the screenshot, the text itself would not render nicely due to non printable characters, I assume)
I'm wondering if there's a way to convert this from whatever binary it is into a tag I can use to search the .exe. I could of course search for the "All notable changes to this project..." text and back up a bit, but that seems error-prone. It would also be handy to know accurately where to stop.
I did consider using the node-ffi module and hooking into the Windows API functions; however, that would be Windows-only, and I think I need Linux support.
Related
I'm building an app that lets the user drag data items out of the browser onto their OS desktop as individual files.
The data I'm exporting is JSON, and I'm setting the MIME type appropriately, but my Mac is wrapping the data in a bunch of strange binary that makes the file worthless for all its intended purposes (which expect pure JSON).
Here's the code I'm using to set up the drag-out-of-browser data:
function onDragStart( event, itemId ) {
    let data = itemToJSON(itemId)
    event.dataTransfer.setData('application/json', data)
    event.dataTransfer.setData('text/plain', data)
}
Here's a screenshot of the file when opened in VSCode. (I'm not pasting the text into this post because I don't expect the non-ASCII to survive a web form.1)
In the screenshot above, the data is the string "TOM RULES".
It includes some plain text that is suggestive, like "bplist", "com.apple.traditional-mac-plain-text", and "public.utf8-plain-text". You can also see from the VSCode breadcrumbs that the OS chose the proprietary ".textClipping" file extension despite my declaring the data type. (I would expect a sane OS to choose ".txt" and ".json" for text/plain and application/json.)
It makes no difference whether I use the proper MIME type for JSON or just stick with plain text: I get the same garbage binary wrapper every time.
This question is also concerned with dragging custom text data out of a browser, but there's no mention of this problem. It's also 11 years old, and I'm already doing what they suggest: declaring my data as plain text.
Is there anything I can do to avoid this, so that the file contains only the literal ASCII text I provide?
(Am using Firefox for dev, although I suspect this is about the OS and not about the browser.)
1 If anybody really needs the actual file data, I can paste a base64-encoded representation here. For now, I'm not going to the trouble.
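One partial workaround I'm aware of: Chromium-based browsers support a nonstandard DownloadURL drag type whose value is a colon-separated mimetype:filename:url triple, which lets you control the dropped file's name and contents. Firefox ignores that type, so it won't fix the .textClipping behaviour there; treat this as a hedged sketch rather than a fix:

```javascript
// Build the value for Chromium's nonstandard "DownloadURL" drag type:
// "mimetype:filename:url". Here the url is a data: URL carrying the
// payload itself. (btoa is the browser path; Buffer is the fallback so
// the helper also runs under older Node versions.)
function makeDownloadURL(mime, filename, data) {
  const base64 = typeof btoa === 'function'
    ? btoa(data)
    : Buffer.from(data, 'utf8').toString('base64');
  return mime + ':' + filename + ':data:' + mime + ';base64,' + base64;
}

// In the dragstart handler (itemToJSON is the question's own helper):
// event.dataTransfer.setData('DownloadURL',
//   makeDownloadURL('application/json', 'item.json', itemToJSON(itemId)));
```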
I am new to TensorFlow. I have developed a model that detects a special kind of card using TensorFlow and OpenCV, and I can detect the card properly with my webcam in offline mode. Now I want to migrate it to the web (TensorFlow.js), but I am facing some issues with the conversion.
I have the checkpoint, meta, and data files, along with the frozen inference graph .pb file.
I generated the list of node names using the following code:
import tensorflow as tf

modelName = './<path_to_meta_file>'
tf.reset_default_graph()
with tf.Session() as sess:
    saver = tf.train.import_meta_graph(modelName)
    graph_def = tf.get_default_graph().as_graph_def()
    node_list = [n.name for n in graph_def.node]
    print(node_list)
Here is the output file:
Output node names using the tensorflow import_meta_graph()
My concern is: what should I pass as the output node names in the command below?
tensorflowjs_converter --input_format=tf_frozen_model --output_node_names='<what to write here from that output txt file?>' ./frozen_inference_graph.pb ./web_model
I want to run my existing model with a real webcam on the client side; any solutions and suggestions would be highly appreciated.
Setup Details:
TensorFlow 1.12.0
Python 3.5 (via Anaconda)
For me the tensorflowjs_converter ran through by using "Placeholder" as an argument for --output_node_names:
tensorflowjs_converter --input_format=tf_frozen_model --output_format=tensorflowjs --output_node_names=Placeholder ./frozen_inference_graph.pb ./web_model
Not sure if this is a valid solution, though, because even though the converter runs without an error message, I get the output from the model_pruner: "Graph size before: 1187 nodes, 1221 edges. Graph size after: 1 nodes, 0 edges." - so my input is effectively reduced to nothing, which doesn't seem to be right either.
Update:
After two more hours of research I found that TensorBoard shows the node names (refer to this page). In my case it turned out that --output_node_names=final_result, and Placeholder is no more than what the name says: a placeholder that needs to be filled with valid content.
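In case it helps anyone doing the same search: a common heuristic for picking --output_node_names is to look for graph "sinks", i.e. nodes that no other node consumes. A hedged sketch in JavaScript over the dumped node list (the {name, input} record shape mirrors the fields graph_def.node carries, but adapt it to however you exported the list):

```javascript
// Collect every node name that appears as an input to some other node,
// then return the nodes that are never consumed - the usual candidates
// for --output_node_names.
function findSinkNodes(nodes) {
  const consumed = new Set();
  for (const n of nodes) {
    for (const inp of n.input || []) {
      // Inputs may carry a ^control prefix or an :output-index suffix
      consumed.add(inp.replace(/^\^/, '').split(':')[0]);
    }
  }
  return nodes.map(n => n.name).filter(name => !consumed.has(name));
}
```

On a typical classification graph this surfaces names like final_result while correctly excluding Placeholder, which feeds everything else.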
I'm using this technique to extract the click events in my SharePoint site. It uses jquery and a regular expression to capture clicks and report them as events to google analytics.
I'm also just past total newbie with regex -- It is starting to make some sense to me, but I have a lot to learn still. So here goes.
I have a preapproved list of filetypes that I am interested in based on the site listed above.
var filetypes = /\.(zip|pdf|doc.*|xls.*|ppt.*|mp3|txt|wma|mov|avi|wmv|flv|wav|jpg)$/i;
But it isn't quite working like I need. With the $ I assume it is trying to match to the end of the line. But often in SharePoint we get links like this:
example.org/sharepoint/_layouts/15/wopiframe.aspx?sourcedoc=/sharepointlibrary/the%20document%20name.docx&action=default&defaultitemopen=1
The two problems I have are that I can't count on the file name coming before the query or hash, and I can't count on it being at the end. On top of that, there are all the different Microsoft Office extensions to handle.
I found this thread on extracting extensions, but it doesn't seem to work correctly.
I've put together this version
var filetypes = /\.(zip|pdf|doc|xls|ppt|mp3|txt|wma|mov|avi|wmv|flv|wav|jpg)[A-Za-z]*/i;
I changed the Office bits from doc.* to plain doc, added the optional alpha characters afterwards, and removed the $ end anchor. It seems to be working with my test sample, but I don't know if there are gotchas that I don't understand.
Does this seem like a good solution, or is there a better way to match a predetermined list of extensions (including, for example, the Office variants like doc, docx, docm) that is either before the query string or might be one parameter in the query string?
I would go with the following which matches file name and extension:
/[^/]+\.(zip|pdf|doc[xm]?|xlsx?|ppt|mp3|txt|wma|mov|avi|wmv|flv|wav|jpg)/i
Outputs the%20document%20name.docx from your example.
There may be other formats that it might not work on but should get you what you want.
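For anyone who wants to sanity-check it, here is that regex run against the wopiframe URL from the question. Note that, as written, the alternation covers doc/docx/docm and xls/xlsx but not pptx; extend it if you need that:

```javascript
// The answer's regex: file name plus a whitelisted extension, anywhere
// in the URL (so it also matches inside a query-string parameter).
const filetypes = /[^/]+\.(zip|pdf|doc[xm]?|xlsx?|ppt|mp3|txt|wma|mov|avi|wmv|flv|wav|jpg)/i;

const url = 'example.org/sharepoint/_layouts/15/wopiframe.aspx?sourcedoc=/sharepointlibrary/the%20document%20name.docx&action=default&defaultitemopen=1';
const m = url.match(filetypes);
// m[0] is the file name with extension, m[1] the bare extension
```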
In an experimental extension I am working on, I use a function to get the source of a webpage and assign it to a variable. It worked perfectly fine. However, I want to change the way it works and get the content from a txt file instead.
I am hosting a txt file at http://1.2.3.4/1.txt.
What I want is to assign the contents of this txt file to a variable.
Function is here: http://jsfiddle.net/qumsm/.
(The function is not mine; I got it from another extension's XPI, which I can't remember right now. Respect to its coder.)
The function produces "ÿþP" as its result, which I don't understand.
That's a byte order mark, the file you are looking at seems to be using UTF-16 LE encoding. You need to use nsIConverterInputStream instead of nsIScriptableInputStream when reading in that data and specify the correct encoding to convert from. nsIScriptableInputStream is only useful when reading in ANSI data, not Unicode. See code example on MDN.
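For reference, the garbled "ÿþ" is what the BOM bytes 0xFF 0xFE look like when decoded as Latin-1, and the "P" (0x50) is the first byte of the UTF-16 LE text. A small sketch of BOM sniffing, in plain JavaScript and independent of the XPCOM streams:

```javascript
// Inspect the leading bytes of a file for a byte order mark and report
// the encoding it implies. Returns null when no BOM is present, in
// which case you fall back to a configured default encoding.
function sniffEncoding(bytes) {
  if (bytes.length >= 3 && bytes[0] === 0xEF && bytes[1] === 0xBB && bytes[2] === 0xBF) {
    return 'UTF-8';
  }
  if (bytes.length >= 2 && bytes[0] === 0xFF && bytes[1] === 0xFE) return 'UTF-16LE';
  if (bytes.length >= 2 && bytes[0] === 0xFE && bytes[1] === 0xFF) return 'UTF-16BE';
  return null;
}
```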
I'm trying to determine the best way to implement localization into one of our web apps. This app is going to have a large number of javascript files, which will need to be localized as well.
Localizing the .NET code is straightforward enough. We have a file called WebResources.resx which contains all strings in English (our fallback language). Then we just add additional files with alternative localized information (e.g. WebResources.es-mx.resx). Then, bam! .NET pretty much takes care of the rest.
Pretty sweet. But when it comes to JavaScript, not so fast. Reading MSDN, they recommend:
You create a separate script file for each supported language and culture. In each script file, you include an object in JSON format that contains the localized resources values for that language and culture.
This seems like a maintenance nightmare that I'd like to avoid. Plus I'd like to avoid having to use the asp.net ScriptManager. So I got the bright idea of trying to use the resource files in my .js files, e.g. foobar.js:
function showGenericError(){
    alert('<%= Resources.WebResources.JsGenericError %>');
}
This unfortunately does not work, as .NET does not do any processing on .js files. So the next idea I got was from the answer on this thread. It recommended having a JavaScript file that contains all your language strings. This feels like a waste of resources, since at run time I only need one language, not all of them.
This leads me to the solution I'm planning on implementing. I plan to have a generic handler that writes out JSON that is localized for the language the current user is in need of. Here is a sample of what the .ashx page will look like:
public void ProcessRequest (HttpContext context) {
    // The response is a JavaScript assignment, not bare JSON,
    // so serve it with a script content type.
    context.Response.ContentType = "application/javascript";
    StringBuilder json = new StringBuilder();
    using (StringWriter jsonStringWriter = new StringWriter(json))
    using (JsonTextWriter jsonWriter = new JsonTextWriter(jsonStringWriter))
    {
        jsonWriter.WriteStartObject();
        jsonWriter.WritePropertyName("genericErrorMessage");
        jsonWriter.WriteValue(Resources.WebResources.GenericErrorMessage);
        jsonWriter.WriteEndObject();
    }
    context.Response.Write("var webResources = " + json.ToString() + ";");
}
In the head of my pages I will have:
<script type="text/javascript" src="js/webResources.js.ashx"></script>
Then my js file will look like:
function showGenericError(){
    alert(webResources.genericErrorMessage);
}
Almost seems too easy, right? So my question is, does this make sense? Am I missing a "gotcha" somewhere? What are the downsides? Is there a better way to do this?
I posted a similar question a while ago, and this is what I came up with:
Localize javascript messages and validation text
The advantage here is that you can share resources used in regular .net pages.
Your approach looks fine as well.
Here is the way I did it, just in case someone finds it useful.
I didn't want to put any Razor in the JS; because of CSP I kept the JS files separate from the cshtml.
So I added a <meta> element in the shared cshtml whose content is an array of arrays; each element of the array is a key/value pair with the resource name and the localized string as returned by Razor:
<meta name="resources" content="[
    ['name1', '@HttpUtility.JavaScriptStringEncode(Resources.name1)'],
    ['name2', '@HttpUtility.JavaScriptStringEncode(name2)']
]" />
Then, in the JS file, I convert this into a dictionary:
let livstrResMap = document.querySelector("meta[name='resources']").getAttribute("content");
livstrResMap = livstrResMap.replace(/'/g, '"');
let lioJSN = JSON.parse(livstrResMap);
let mcvcodResources = new Map(lioJSN);
Finally, I use the localized string via the Format helper defined here:
alert(mcvcodResources.get('name1'));
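The meta-to-Map conversion can also be pulled into a standalone function, which makes the quoting caveat explicit: the single-to-double quote replace assumes the localized strings contain no stray single quotes, which is exactly what HttpUtility.JavaScriptStringEncode guards against upstream. A hedged sketch, testable outside the DOM:

```javascript
// Same conversion as in the answer above: the meta content uses single
// quotes (double quotes would end the HTML attribute), so swap them for
// double quotes before JSON.parse, then build a Map of key/value pairs.
function parseResourceMeta(content) {
  const json = content.replace(/'/g, '"');
  return new Map(JSON.parse(json));
}

// Usage with the DOM (as in the answer):
// const map = parseResourceMeta(
//   document.querySelector("meta[name='resources']").getAttribute('content'));
```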