Unity WebGL External Assets - javascript

I'm developing a WebGL project in Unity that has to load some external images from a directory. It runs fine in the editor, but when I build it, it throws a DirectoryNotFoundException in the web console. I am putting the images in the Assets/StreamingAssets folder, which becomes the StreamingAssets folder in the built project (at the root, next to index.html). The images are located there, yet the browser still complains about not being able to find that directory. (I'm opening the build directly on my own computer, with no web server running.)
I guess I'm missing something very obvious, but it seems like I could use some help. I only started learning Unity a week ago, and I'm not that great with C# or JavaScript yet (I'm trying to get better...). Is this somehow related to JavaScript security restrictions?
Could someone please point me in the right direction: how should I be reading images (no writing needs to be done) in Unity WebGL?
string appPath = Application.dataPath;
string[] filePaths = Directory.GetFiles(appPath, "*.jpg");
According to unity3d.com, everything except threading and reflection is supported in WebGL builds, so I/O should be working - or so I thought :S
I worked on it a bit more and am now trying to load a text file containing the paths of the images (separated by ';'):
TextAsset ta = Resources.Load<TextAsset>("texManifest");
string[] lines = ta.text.Split(';');
Then I convert each line to a proper path and add it to a list:
string temp = Application.streamingAssetsPath + "/textures/" + s;
filePaths.Add(temp);
Debug.Log tells me it looks like this:
file://////Downloads/FurnitureDresser/build/StreamingAssets/textures/79.jpg
So that seems to be all right, except for all those slashes (which look a bit odd to me).
And finally create the texture:
WWW www = new WWW("file://" + filePaths[i]);
yield return www;
Texture2D new_texture = new Texture2D(120, 80);
www.LoadImageIntoTexture(new_texture);
And around this last part (I'm not sure exactly where: WebGL projects do not seem easily debuggable) it tells me: NS_ERROR_DOM_BAD_URI: Access to restricted URI denied
Can someone please enlighten me as to what is happening? And most of all, what would be the proper solution for creating a directory from which I can load images at runtime?

I realise this question is now a couple of years old but, since this still appears to be a commonly asked question, here is one solution (sorry, the code is C#, but I am guessing the JavaScript implementation is similar). Basically, you need to use UnityWebRequest and coroutines to access a file from the StreamingAssets folder.
1) Create a new Loading scene (which does nothing but query the files; you could have it display some status text or a progress bar to let the user know what is happening).
2) Add a script called Loader to the Main Camera in the Loading scene.
3) In the Loader script, add a variable to indicate whether the asset has been read successfully:
private bool isAssetRead;
4) In the Start() method of the Loader script:
void Start()
{
    // if WebGL, this will be something like "http://..."
    string assetPath = Application.streamingAssetsPath;
    bool isWebGl = assetPath.Contains("://") ||
                   assetPath.Contains(":///");
    try
    {
        if (isWebGl)
        {
            StartCoroutine(
                SendRequest(
                    Path.Combine(
                        assetPath, "myAsset")));
        }
        else // desktop app
        {
            // do whatever you need if the app is not WebGL
        }
    }
    catch
    {
        // handle failure
    }
}
5) In the Update() method of the Loader script:
void Update()
{
    // check to see if the asset has been successfully read yet
    if (isAssetRead)
    {
        // once the asset is successfully read,
        // load the next screen (e.g. main menu or gameplay)
        SceneManager.LoadScene("NextScene");
    }
    // need to consider what happens if
    // the asset fails to be read for some reason
}
6) In the SendRequest() method of the Loader script:
private IEnumerator SendRequest(string url)
{
    using (UnityWebRequest request = UnityWebRequest.Get(url))
    {
        yield return request.SendWebRequest();
        if (request.isNetworkError || request.isHttpError)
        {
            // handle failure
        }
        else
        {
            try
            {
                // the entire file is returned via downloadHandler
                string fileContents = request.downloadHandler.text;
                // or, for binary assets:
                // byte[] fileContents = request.downloadHandler.data;

                // do whatever you need to do with the file contents
                // (loadAsset stands for your own method that consumes the asset)
                if (loadAsset(fileContents))
                    isAssetRead = true;
            }
            catch (Exception x)
            {
                // handle failure
            }
        }
    }
}

Put your image in the Resources folder and use Resources.Load to open the file and use it.
For example:
Texture2D texture = Resources.Load("images/Texture") as Texture2D;
if (texture != null)
{
    GetComponent<Renderer>().material.mainTexture = texture;
}
The directory listing and file APIs are not available in WebGL builds.
Basically, no low-level I/O operations are supported.

Related

Virtual paths from the client to real paths on the server

The client is supposed to see just a directory and its contents on the server (FS_ROOT).
The server is supposed to convert the paths that it receives from the client into real paths that exist, and perform the file operations that the client requested on them.
I made these two functions to handle that, and I want to ask whether they are secure enough. I mean, there should be no way for the client to fool the server into doing something outside FS_ROOT:
function fromVirtualPath(virtPath) {
    if (virtPath === '/' || virtPath === '.')
        return FS_ROOT;
    virtPath = virtPath.trim();
    if (virtPath[0] === '/')
        virtPath = virtPath.substr(1);
    const absPath = path.resolve(FS_ROOT, virtPath);
    if (absPath.indexOf(FS_ROOT) !== 0)
        throw new Error('Outside root dir - no permissions!');
    return absPath;
}
function toVirtualPath(absPath) {
    return '/' + path.relative(FS_ROOT, absPath);
}
Example real path: /www/site.com/public_html/yo
Client should see: /yo
About fromVirtualPath: I would simply move the line virtPath = virtPath.trim(); to be the first line of the function; then it's OK.
If the values passed to toVirtualPath are always return values of fromVirtualPath, then yes, it is secure enough; otherwise we could check whether the value is a good absPath.
function fromVirtualPath(virtPath) {
    virtPath = virtPath.trim();
    if (virtPath === '/' || virtPath === '.')
        return FS_ROOT;
    if (virtPath[0] === '/')
        virtPath = virtPath.substr(1);
    const absPath = path.resolve(FS_ROOT, virtPath);
    if (absPath.indexOf(FS_ROOT) !== 0)
        throw new Error('Outside root dir - no permissions!');
    return absPath;
}
function toVirtualPath(absPath) {
    if (absPath.indexOf(FS_ROOT) !== 0)
        throw new Error('Bad absolute path!');
    return '/' + path.relative(FS_ROOT, absPath);
}
Your code is a bit insecure until you make use of the techniques provided in the Node.js article mentioned below. Try implementing the following code:
function fromVirtualPath(virtPath) {
    virtPath = virtPath.trim();
    if (virtPath === '/' || virtPath === '.')
        return FS_ROOT;
    if (virtPath.indexOf('\0') !== -1)
        throw new Error('That was evil.');
    const absPath = path.join(FS_ROOT, virtPath);
    if (absPath.indexOf(FS_ROOT) !== 0)
        throw new Error('Outside root dir - no permissions!');
    return absPath;
}
function toVirtualPath(absPath) {
    return '/' + path.relative(FS_ROOT, absPath);
}
The following article from Node.js, "How can I secure my code?", will be really helpful to you.
Poison Null Bytes
Poison null bytes are a way to trick your code into seeing another
filename than the one that will actually be opened.
if (filename.indexOf('\0') !== -1) {
    return respond('That was evil.');
}
Preventing Directory Traversal
This example assumes that you already checked the
userSuppliedFilename variable as described in the "Poison Null
Bytes" section above.
var rootDirectory = '/var/www/'; // this is your FS_ROOT
Make sure that you have a slash at the end of the allowed folder's name; you don't want people to be able to access /var/www-secret/, do you?
var path = require('path');
var filename = path.join(rootDirectory, userSuppliedFilename);
Now filename contains an absolute path and doesn't contain ..
sequences anymore - path.join takes care of that. However, it might
be something like /etc/passwd now, so you have to check whether it
starts with the rootDirectory:
if (filename.indexOf(rootDirectory) !== 0) {
    return respond('trying to sneak out of the web root?');
}
Now the filename variable should contain the name of a file or
directory that's inside the allowed directory (unless it doesn't
exist).
Security is a complex matter. And you can never be sure.
Despite the fact that I couldn't find any flaws in @RahulVerma's answer, I'll add my 2 cents...
The link that @RahulVerma posted is official, but it is not documentation per se. And in the documentation there is nothing about poison null bytes... strange, isn't it?
And that makes you think: maybe, just maybe, when the fs and/or path modules were written, the authors didn't put enough effort into security considerations, or just missed that. Yes, maybe there are some good reasons for you, and not fs/path, to handle the \0. But wouldn't it also be better if everyone was protected from \0 by default, and only on some rare occasions could you explicitly set an option to allow \0 in paths?
So... what I am trying to say is: security is hard, even for the best of us, and without proper peer review (currently, fewer than 100 views on this question do not strike me as "proper peer review") or, better yet, a history of successful time in production, you should not be satisfied with these answers (mine included) saying "it's OK if you add this or that".
Why don't you use some code that has already been battle-tested instead of trying to write secure code yourself?
E.g. serve-static, which is used in Express.
(It probably doesn't meet your needs - it serves static files, after all - but you get the idea.)
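For instance, a minimal sketch with Express, whose express.static middleware is serve-static under the hood; the FS_ROOT value here is just the example root from the question:
const express = require('express');

const FS_ROOT = '/www/site.com/public_html'; // example root from the question

const app = express();

// express.static (i.e. serve-static) normalizes the request path and refuses
// to serve anything that resolves outside of FS_ROOT, including '..' tricks.
app.use(express.static(FS_ROOT));

app.listen(3000);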
Even if you don't want another dependency in your project, you can at least study and copy from an implementation that has proved itself. (But, yes, it doesn't seem to do anything different from @RahulVerma's answer.)
That said, I'd like to point out that:
If you copy the implementation, you can make a mistake while doing so.
Even if your code is safe, consider how safely you manage your code. Will it be safe tomorrow?
Even well-tested libraries and engines can, and often do, have bugs and fall prey to 0-day exploits.
Oh! Just found: https://github.com/autovance/ftp-srv/issues/167
It's about the library that was suggested in another question of yours.
So, if you decide (or are assured) that your code is now surely safe, don't stop there! Add an extra layer of security to it anyway:
restrict the server's access to folders outside of /www/site.com/public_html/ at the OS level.
The following principles can be applied to secure client access to paths relative to the web root:
Restrict your service's access to anything outside of your public web root folder. Rationale: begin with ZERO trust.
Split the path provided by the user into parts. This removes the leading '/' and all '/' separators, leaving only the parts of the path. Better yet, whitelist the acceptable characters in a path part using a regular expression. Rationale: sanitize user input.
Validate each part sequentially for existence, assuming that the first part starts from the web root as intended. Disallow .. (parent dir) in part names (to prevent traversal outside the web root folder). Rationale: sanitize and validate user input.
Avoid using symbolic links under the web root folder (to prevent traversal outside the web root folder). Rationale: reduce attack surface.
Fail early with an error upon encountering the first invalid part. Rationale: reduce attack surface.
To optimize system calls, you can do the check for .. and the part whitelisting in one pass. If there are any .. parts or other offending parts in the path, return an error. Otherwise, split the parts, rebuild the absolute path string by concatenating them with your web root, and do one existence check instead of multiple folder existence checks along the path. A sketch of this one-pass variant follows below.
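A minimal Node.js sketch of that one-pass approach, assuming FS_ROOT is your absolute web root; the resolveVirtualPath name and the whitelist regex are only illustrative and should be adapted to your own rules:
const path = require('path');
const fs = require('fs');

const FS_ROOT = '/www/site.com/public_html';  // your web root
const PART_WHITELIST = /^[A-Za-z0-9._-]+$/;   // acceptable characters per path part (example rule)

function resolveVirtualPath(virtPath) {
    // Split into parts; empty strings from a leading or doubled '/' are dropped
    const parts = virtPath.split('/').filter(part => part.length > 0);

    // Check for '..' and whitelist violations in one pass, failing early
    for (const part of parts) {
        if (part === '..' || !PART_WHITELIST.test(part)) {
            throw new Error('Invalid path part: ' + part);
        }
    }

    // Rebuild the absolute path from the web root and do a single existence check
    const absPath = path.join(FS_ROOT, ...parts);
    if (!fs.existsSync(absPath)) {
        throw new Error('No such file or directory: ' + virtPath);
    }
    return absPath;
}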
Instead of trying to validate every path yourself, let the operating system do it for you! This is a good example of an application that could use a chroot.
Here is an example of an npm library which creates a chroot.
> var chroot = require("chroot")
> var fs = require("fs")
> chroot('/virtual/root/here', 'nobody')
> fs.readdir(".", function(err, files) { console.log(files); }) // Lists virtual root
> fs.readdir("..", function(err, files) { console.log(files); }) // Also lists virtual root
> fs.readdir("/", function(err, files) { console.log(files); }) // ALSO lists virtual root
Should you run this script as root, it immediately changes the user to "nobody" and sandboxes you to your virtual root. This prevents the script from accessing anything outside it, and the program can't chroot out either, as it's no longer running as root.
Now that you are chrooted into your virtual root, using "/" will give you a directory listing of your virtual root - essentially, you can use your virtual path directly in fs.readdir()!
Need to access some specific files outside the new root? Use microservices! You can run a Node.js instance in the background as your file accessor and communicate between your main server and that file accessor. Having two Node.js instances not only allows your background task to sandbox itself, it also gives you a second process running in parallel.
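A minimal sketch of that split, using Node's built-in child_process IPC. For brevity it forks itself and just reads a file path sent from the parent; in a real setup the accessor would be its own script and would chroot itself (as above) before serving any requests, and the message format here is made up for the example:
const { fork } = require('child_process');
const fs = require('fs');

if (process.send) {
    // Child: the file accessor. In a real setup this process would chroot
    // itself first, so every path it receives is relative to the virtual root.
    process.on('message', (msg) => {
        fs.readFile(msg.path, 'utf8', (error, data) => {
            process.send({ path: msg.path, error: error && error.message, data });
        });
    });
} else {
    // Parent: the main server. It never touches the filesystem itself;
    // it only asks the accessor process over the IPC channel.
    const accessor = fork(__filename);
    accessor.on('message', (msg) => {
        console.log(msg.error || ('read ' + msg.data.length + ' bytes from ' + msg.path));
        accessor.disconnect(); // let both processes exit
    });
    accessor.send({ path: __filename }); // example request: read this very script
}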
Yours is basic Java code. In real-world scenarios, such basic Java code should not be deployed on the server side, and we can't expect security out of it.
To add security checks to this Java code, many APIs come as part of the Spring Framework, but since we are writing plain Java code we can make use of the Java NIO package alone, namely the WatchService and WatchEvent APIs:
import static java.nio.file.StandardWatchEventKinds.ENTRY_CREATE;
import static java.nio.file.StandardWatchEventKinds.ENTRY_DELETE;
import static java.nio.file.StandardWatchEventKinds.ENTRY_MODIFY;

import java.io.IOException;
import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.WatchEvent;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;

class DirectoryWatchTest {
    public static void main(String[] args) {
        try {
            WatchService watchService = FileSystems.getDefault().newWatchService();
            Path path = Paths.get("C:/");
            /**
             * The register() method of the Path class takes a WatchService object and an event type for which the
             * application needs to get notified.
             *
             * The supported event types are:
             * ENTRY_CREATE: indicates if a directory or file is created.
             * ENTRY_DELETE: indicates if a directory or file is deleted.
             * ENTRY_MODIFY: indicates if a directory or file is modified.
             * OVERFLOW: indicates if the event might have been lost or discarded. This event is always implicitly
             * registered, so we don't need to explicitly specify it in the register() method. */
            path.register(watchService, ENTRY_CREATE, ENTRY_DELETE, ENTRY_MODIFY);
            while (true) {
                WatchKey key;
                try {
                    key = watchService.take();
                } catch (InterruptedException ex) {
                    return;
                }
                /**
                 * The whole workflow:
                 * A Watchable object is registered with a watch service by invoking its register method,
                 * returning a WatchKey to represent the registration.
                 *
                 * When an event for an object is detected, the key is signalled, and if not currently signalled,
                 * it is queued to the watch service so that it can be retrieved by consumers that invoke the poll or
                 * take methods to retrieve keys and process events.
                 *
                 * The List<WatchEvent<?>> pollEvents() method retrieves and removes all pending events for
                 * this watch key, returning a List of the events that were retrieved. Note that this method does not
                 * wait if there are no events pending. */
                for (WatchEvent<?> event : key.pollEvents()) {
                    WatchEvent.Kind<?> kind = event.kind();
                    @SuppressWarnings("unchecked")
                    WatchEvent<Path> ev = (WatchEvent<Path>) event;
                    Path fileName = ev.context();
                    System.out.println(kind.name() + ": " + fileName);
                    if (kind == ENTRY_MODIFY && fileName.toString().equals("DirectoryWatchTest.java")) {
                        System.out.println("My source file has changed!!!");
                        System.out.println("My source file has changed!!! - Modified");
                    }
                }
                /** Once the events have been processed, the consumer invokes the key's reset method to reset the
                 * key, which allows the key to be signalled and re-queued with further events. */
                boolean valid = key.reset();
                if (!valid) {
                    break;
                }
            }
        } catch (IOException ex) {
            System.err.println(ex);
        }
    }
}
This kind of basic security check can be put into Java code. The user will be able to watch the URL unless we get hold of the protocol and hide it via @PutMapping or by implementing security-focused APIs on top of this, but for that we need framework-based APIs.

Name html blob urls for easy reference

Within our web application we load a lot of content from package files (zipped packages containing HTML, JS, CSS, images and so on). The module loader (client-side JS) processes the packages and makes the content available to the DOM using blob URLs.
While this works very nicely, it's sometimes tedious to find the right piece of JavaScript for debugging.
I.e.: in Chrome, in the development console -> Sources, all blob URLs are listed under (no domain) and have random names such as:
blob:https://example.com/0613efd7-6977-4872-981f-519eea0bc911
In a normal production environment there are roughly 100 lines like this, so finding the right one might take some time.
I'd very much like to name the blob URLs, or do something else to make them easier to find for debugging purposes. This seems possible, since webpack does something like this; however, I can't seem to find out how. Can anybody point me in the right direction?
OK, the way I would do it is to have some global that keeps track of the URLs, using a simple reverse map.
One problem with this, of course, is that references to blobs that no longer exist will be kept in memory, but if, say, you only enable this for debugging purposes, this might not be a problem.
var namedblobs = {};
function addNamedBlob(name, uri) {
    namedblobs[uri] = name;
}
function getNamedBlob(uri) {
    return namedblobs[uri];
}
function createSomeBlob() {
    //for testing just a random number would do
    return Math.random().toString();
}
var blob = createSomeBlob();
addNamedBlob("test1", blob);
addNamedBlob("test2", createSomeBlob());
console.log(getNamedBlob(blob)); //should be test1
Finally, I have found a solution that works to my liking. Our application already used a service worker with caching active, so I ended up writing the module files into the service worker cache whenever somebody has debug mode turned on.
Since the URL of each resource file is static this way, all the nice browser features such as breakpoints are now usable again.
Below I've posted the relevant code of the service worker. The rest of the code is just plain service worker caching.
api.serveScript = function(module, script, content){
    try{
        content = atob(content);
    } catch(err){}
    return new Promise(function(resolve, reject){
        var init = {
            status: 200,
            statusText: "OK",
            headers: {'Content-Type': 'text/javascript'}
        };
        caches.open("modulecache-1").then(function(cache) {
            console.log('[ServiceWorker] Caching ' + module + "/" + script);
            cache.put("/R/" + module + "/script/" + script, new Response(content, init));
            resolve("/R/" + module + "/script/" + script);
        });
    });
}
Thanks for your answers and help. I hope this solution is going to help some others too.
@Keith's option is probably the best one (create a Map of your blob URIs and easy-to-read file names).
You could also write a dynamic router that points some nice URL to the blob URIs, but if you are open to doing this, then just don't use blob URIs.
Another hackish workaround, really less clean than the Map, would be to append a fragment identifier to your blob URI: blob:https://example.com/0613efd7-6977-4872-981f-519eea0bc911#script_name.js.
Beware: this should work for application/javascript Blobs and some other resource types, but not for documents (HTML/SVG/...), where the fragment identifier has a special meaning.
var hello = new Blob(["alert('hello')"], {type:'application/javascript'});
var script = document.createElement('script');
script.src = URL.createObjectURL(hello) + '#hello.js';
document.head.appendChild(script);
console.log(script.src);
var css = new Blob(["body{background:red}"], {type:'text/css'});
var style = document.createElement('link');
style.href = URL.createObjectURL(css) + '#style.css';
style.rel = 'stylesheet';
document.head.appendChild(style);
console.log(style.href);
And here it is as a fiddle, for browsers that don't like the null-origined Stack Snippets iframes.

How do I export data as an m3u8 file?

I want to download and play an m3u8 file which is on the server machine. I am using the following code to read the m3u8 file and send it to the client's web browser.
The browser is displaying the contents of the file instead of downloading it.
So please let me know how to download it.
if ((exportHandle = fopen(v3FileName, "a+")) != NULL) {
    long end = 0, start = 0, pos = 0;
    char* m3u8FileDataBuff = NULL;
    fseek(exportHandle, 0, SEEK_END);
    end = ftell(exportHandle);
    fseek(exportHandle, 0, SEEK_SET);
    start = ftell(exportHandle);
    pos = end - start;
    m3u8FileDataBuff = (char *) malloc(pos);
    end = 0;
    start = 0;
    fread(m3u8FileDataBuff, 1, pos, exportHandle);
    pClienCommunication->writeBuffer(m3u8FileDataBuff, pos);
    free(m3u8FileDataBuff);
    fclose(exportHandle);
}
The client's web browser is displaying the content because the MIME type of the response is either empty or something like "text/plain". Set up the HTTP response header properly to indicate the MIME type of an m3u8 file (application/x-mpegURL or vnd.apple.mpegURL).
The piece of code you provided does not seem to set anything in the response header, just the content.
Check the available API of pClienCommunication->, or the place where that originates, to see what your options are for adjusting the response header.
Or maybe it's possible to work around this with some rule set up in the web server serving the response, setting the MIME type for certain URLs or based on the response content (but applying such rules at the web server level is usually more costly than adjusting the response while it is being created in the C++ part).
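For illustration, the start of the HTTP response would need to look something like the following; how these lines are actually emitted depends entirely on what pClienCommunication (or the web server in front of it) exposes, so treat this purely as a sketch of the target output:
HTTP/1.1 200 OK
Content-Type: application/x-mpegURL

#EXTM3U
... (the m3u8 data written by writeBuffer) ...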
And why is this tagged C++, when the code itself is C-like with all the problems that brings? In modern C++ you never do things like fclose(..) manually, because that is done in the destructor of a file wrapper class, so you don't risk the fclose being skipped when some exception is raised in fread, etc.
So in modern C++ these things should look somewhat like this:
{
    SomeFileClass exportFile(v3FileName, "a+");
    if (exportFile.isOK()) {
        SomeFileContentBuffer data = exportFile.read();
        pClienCommunication->writeBuffer(data.asCharPtr(), data.size());
    }
}
That way you can't forget to release the file handle or the buffer memory (the destructors of the particular helper classes will handle that).

Call a Java Function using Browser's Client Side JavaScript

Good morning!
I have been working on a client side browser based app using JavaScript that (all of a sudden) needs the capability to save and load files locally.
The saved files are plain text (.txt) files.
I have managed to get JavaScript to read existing text files. However, I am unable to find reliable information on how to create and edit the contents of these files.
Based on what I see online, I am under the impression that you can't do this with JavaScript alone.
I found out from another source that the best way to do this is to outsource the file writing/editing to a Java file and let Java do the work.
I found a code snippet and tweaked it around a bit, but it is not working and I seem to be at a loss:
JAVASCRIPT
<!DOCTYPE html>
<html>
<OBJECT ID="Test" height=0 width=0
    CLASSID="CLSID:18F79884-E141-49E4-AB97-99FF47F71C9E" CODEBASE="JavaApplication2/src/TestJava.java" VIEWASTEXT>
</OBJECT>
<script language="Javascript">
    var Installed;
    Installed = false;
    try
    {
        if (Test == null)
            Installed = false;
        else
            Installed = true;
    }
    catch (e)
    {
        Installed = false;
    }
    alert("Installed :- " + Installed);
    TestStr = Test.SendStr("Basil");
    alert(TestStr);
</script>
</html>
JAVA
import javax.swing.*;

public class TestJava {
    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        // TODO code application logic here
    }

    public String SendStr(String lStr)
    {
        return lStr + "!!!";
    }
}
If someone could point me in the right direction or even just explain why this isn't working, I would appreciate it.
I believe the sandbox prevents all browsers from performing any and all local file writing without an enormous amount of work to get around the access restrictions. It is easier to write files remotely on the server than to write them locally on the client. This is true across all browsers.
So while it may be possible to perform the load function, you cannot perform the 'save' function on the local machine.

How to bundle and access JavaScript files in a BHO DLL?

I have a BHO (Browser Helper Object) for data mining. A lot of low-level DOM manipulation is delegated to JavaScript. Until now my application was picking it up from the application installation directory, but now, because of a client requirement, I have to bundle the JS in the BHO DLL itself.
Now my problem is that I haven't figured out how to add a JS file to my resource file (a.k.a. .rc file). I tried adding an HTML file (which is supported in the Visual Studio 2008 IDE), but I fail to find the HTML resource when I do something like this (g_hInstance is the HINSTANCE of my BHO):
if (!g_hInstance)
{
    ::MessageBox(NULL, L"Fail 0", L"", MB_OK);
    return;
}
HRSRC hRsrc = FindResource(g_hInstance, MAKEINTRESOURCE(IDR_JS), RT_HTML);
if (!hRsrc)
{
    ::MessageBox(NULL, L"no point", L"", MB_OK);
    return;
}
DWORD dwFSz = SizeofResource(g_hInstance, hRsrc);
HGLOBAL hHtml = LoadResource(g_hInstance, hRsrc);
LPVOID pHtml = LockResource(hHtml);
HANDLE hFHtm = CreateFile(L"c:\\temp\\Test1.htm", GENERIC_WRITE, 0, NULL, CREATE_ALWAYS, 0, NULL);
DWORD dwWr;
WriteFile(hFHtm, pHtml, dwFSz, &dwWr, NULL);
CloseHandle(hFHtm);
UnlockResource(hHtml);
ShellExecute(NULL, L"open", L"c:\\temp\\Test1.htm", NULL, NULL, 0);
My questions are:
Is it possible to add JavaScript to a Visual C++ resource file (i.e. to any DLL)? If yes, how do I add it and access it?
If HTML is allowed in an .rc file, then why does FindResource(g_hInstance, MAKEINTRESOURCE(IDR_JS), RT_HTML) always give me NULL?
Thanks,
Got it working. Steps followed:
Right-click on your application's resources and click Add Resource...
This opens a nice-looking dialog; there, choose the Custom resource button.
Provide a simple and intuitive name, like RT_MYSCRIPT.
It will open up an editor. Copy-paste your script code there.
Build your solution and you are good to go.
Code to access your resource
void CTest::ReadResource()
{
    if (NULL != g_hInstance) // g_hInstance is the HINSTANCE of my DLL
    {
        HRSRC hRes = FindResource(g_hInstance, MAKEINTRESOURCE(IDR_SCRIPT), _T("RT_MYSCRIPT"));
        if (NULL != hRes)
        {
            HGLOBAL hgbl = LoadResource(g_hInstance, hRes);
            void * pScript = LockResource(hgbl);
            UINT32 cbScript = SizeofResource(g_hInstance, hRes);
            if (pScript)
            {
                // Do something
            }
            // pScript now points to the contents of your .script file
            // and cbScript is its size in bytes
        }
        else
        {
            ::MessageBox(NULL, L"Failed", L"", MB_OK);
        }
        /*
            Don't free the library until you are done. And do it only if you
            are loading the script from a resource DLL or some other external
            source!! Note: Also do a good amount of exception checking in your code!!
        */
        // FreeLibrary(hMod);
    }
}
Note:
My problem was including and accessing my JavaScript files from a DLL, which I have resolved. The HTML issue is still there, but it is not related to my problem. I will post an update if I get a chance to look into it in the future.
