I can't parse any data from a txt file (it's not a csv for a reason) once it's uploaded to a server, because all the newline characters are apparently gone. The d3.js parser I'm using, parseRows, does not work properly without them.
On a local server everything seems to be fine.
d3.text('fileName.txt', 'text/plain', function(fileContent) {
console.log(/\n/.test(fileContent));
});
[localserver]: true
[onlineserver]: false
I'm using free hosting on Hostinger (an Apache server, according to Wappalyzer). I don't know much about it.
Tried different encodings. No luck.
Update:
I downloaded the txt back from the server and opened it in Sublime Text: no newline characters in it. The exact same local copy is fine.
Solved by avoiding: I decided to save some time and nerves and uploaded my txts to Dropbox instead. In case someone has the same problem, here is a little trick to get direct links to Dropbox files: http://techapple.net/2014/04/trick-obtain-direct-download-links-dropbox-files-dropbox-direct-link-maker-tool-cloudlinker/
Also solved by berserking: changing the extension of the file (to csv, for example) also helps, lol.
Your server is probably trying to sanitize the strings it receives from the UI in order to prevent things like cross-site attacks.
Try escaping the string you send to the server with encodeURI(str), and decode it when you need it back with decodeURI(str).
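For illustration, a minimal sketch (assuming you control the code that sends the file content and the code that reads it back):
var encoded = encodeURI(fileContent);  // "\n" becomes "%0A", so a sanitizer sees no raw newlines
// ... send `encoded` to the server instead of the raw text ...
var restored = decodeURI(encoded);     // the newlines come back intact
console.log(/\n/.test(restored));      // true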
Related
On my dev site, the JavaScript for the maps in the Venue section and the jQuery for the nav scroll and scroll-to-top all work fine: http://yogadham.4pixels.co.uk
When I upload all the same files to the actual server the site is going to sit on, no JavaScript/jQuery works!
http://yogadham.co.uk/xxindex.html
All links are relative to the root, so no problem there. I've checked permissions on the .js files. Is this a server issue? Both servers are Linux. Has anyone had similar issues?
Short Answer: use binary mode instead of ASCII mode when transferring files between Windows and Linux via FTP.
Long Answer
Something seems to be re-encoding your file (index.html) into a single line, probably during the FTP upload, and as a result the // comments are breaking the JavaScript that follows them.
Remove all comments (temporarily) and confirm whether the website works.
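To see why a single line breaks everything, consider a hypothetical two-line snippet (initMap and the selector are made-up names):
// Original source, two lines:
var map = initMap(); // set up the venue map
$('#top').scrollTop(0);
// After the line endings are stripped, the browser sees one line:
var map = initMap(); // set up the venue map $('#top').scrollTop(0);
// Everything after "//" is now part of the comment, so the scroll code never runs.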
Edit: The FTP transfer is the issue: Reference
If you are transferring files from Windows to a Unix based server,
Ascii mode will strip out the CR (carriage return) characters found at
the end of each line. You may notice that the file you uploaded is
smaller than your local file. This is completely normal and is
nothing to worry about.
In your case, the stripped CRs are what is breaking the file.
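As a quick illustration of the size difference the quote mentions (Windows line endings are CR+LF, two bytes each; Unix uses a lone LF):
'line1\r\nline2'.length  // 12 with the CRLF ending
'line1\nline2'.length    // 11 once the CR (0x0D) is stripped, hence the smaller file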
The files are not the same. Take a look at the HTML generated by both. In production you have neither CR (carriage return) nor LF (line feed) characters:
https://www.diffchecker.com/pks371y6
I'm writing an HTTP server in Ruby, and I need to edit files in the browser, including files that contain source code (HTML, JavaScript and Ruby). I need to put the content of any text file into the value of a textarea:
"<textarea>__CONTENT__</textarea>".gsub('__CONTENT__',File.read(filename))
However, this doesn't work if the file contains certain special substrings, such as </textarea>. So I tried to 'prepare' the data by doing certain replacements in the file content. However, there is still an issue if the file contains source code with HTML/Ruby content, especially if I try to send the source of my HTTP server itself. This chain of replacements seems good:
File.read(__FILE__).gsub(/&/, "&"+"amp;").gsub('<', "&"+"lt;").gsub('>', "&"+"gt;")
However, this is not good enough: there is still an issue (in the web browser) when the file contains \'. Is there a robust technique to place arbitrary text in a textarea (server side and/or browser side)?
CGI::escapeHTML will "prepare" strings to be HTML-safe.
require 'cgi'
CGI::escapeHTML(File.read(__FILE__))
This form is good:
CGI::escapeHTML(File.read(__FILE__))
except for the backslash character: a double backslash comes through as a single one.
I found this workaround:
Server side, replace each backslash with the placeholder &99992222; (filename as in the question):
# The '&9999'+'2222;' concatenation keeps the literal placeholder out of this
# file's own source, so serving this very file does not corrupt the round-trip.
CGI::escapeHTML(File.read(filename).gsub('\\', '&9999'+'2222;'))
Browser side, replace "&99992222;" in the textarea with the backslash character:
var node = document.getElementById('textarea_1');
node.value = node.value.replace(/&9{4}2{4};/g, String.fromCharCode(92)); // 92 is '\'
Hoping that no source file contains &99992222; itself!
I'm making an HTML5/jQuery/PHP app which involves uploading CSV files (via a drag and drop), and processing them to a MySQL database. So far, I can upload the files, and I know how to read them into a database.
My question: is it possible to detect whether a CSV file is corrupted, with PHP or JavaScript/jQuery? For example, I can rename somefile.png (an image) to somefile.csv and it still gets uploaded. If I open the renamed file in Notepad++, all I see is garbage, which is expected.
I would like to do this on the client side, so I can alert the user (via jQuery) when the file is corrupted. I'd also like to check on the server side (via PHP) before I start iterating over each CSV file for db processing.
My first thought was to use regular expressions, but I am unsure how to write one for this particular problem. I know the basics of regular expressions but haven't really used them in advanced settings before.
First of all, you should check the content type of the picked file; it should be "text/csv". On the server side you can check the file via the fgetcsv PHP function (http://php.net/manual/function.fgetcsv.php) and catch null or false on error.
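For the client-side half, a minimal sketch (assuming your drop handler receives a File object; note that browsers derive file.type from the extension, so this is only a first filter, not real validation):
function looksLikeCsv(file) {
  // Some Windows setups report CSV files as application/vnd.ms-excel.
  return file.type === 'text/csv' || file.type === 'application/vnd.ms-excel';
}
// usage inside the drop handler:
if (!looksLikeCsv(droppedFile)) alert('Please drop a CSV file.');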
You don't want to validate the file if you're going to read it right afterwards anyway. Just read it in and catch any errors as you go; that way you find out whether the file is valid or corrupt.
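In the browser, that could look something like this (a sketch; droppedFile is assumed to be the File object from the drop event):
var reader = new FileReader();
reader.onload = function (e) {
  var text = e.target.result;
  // A renamed PNG decodes to garbage: NUL bytes and no usable delimiter rows.
  if (/\u0000/.test(text)) {
    alert('This does not look like a valid CSV file.');
  }
};
reader.readAsText(droppedFile);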
OK, I have a web app that uses PHP, MySQL and JavaScript. In an input box you type something, and if the user types words in Korean/Chinese/Japanese, the text gets messed up.
It appears like this: ヘビーãƒãƒ¼ãƒ†ãƒ¼ã‚·ãƒ§ãƒ³.
The text goes through an AJAX call and is wrapped in encodeURIComponent() in the JavaScript, so maybe that's it? I don't know. In the MySQL database it shows up messed up, too!
The charset encoding on my webpage is iso-8859-1. Help?
My charset encoding on my webpage is iso-8859-1
That won't work. You need to switch to UTF-8 for non-European languages, and consistently: in the page's charset declaration, on the MySQL connection, and in the table/column encoding.
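You can see why in the browser console (ヘ is U+30D8, the first character of the example above):
console.log(encodeURIComponent('ヘ')); // "%E3%83%98" — always the UTF-8 bytes
// If the page or database then interprets those bytes as iso-8859-1/cp1252,
// 0xE3 0x83 0x98 renders as "ãƒ˜" — exactly the kind of mojibake shown above.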
So I'm very new to XML-to-JavaScript, and I thought I would learn from w3schools, but this page, http://www.w3schools.com/xml/xml_to_html.asp, shows an example that I can't mimic locally. I copy/pasted the .js and downloaded the XML, but I just get a blank screen!
It works there if you try it yourself, but not for me. Do I need it on a server or something?
Yes, that code retrieves the XML data from a web server using AJAX. Since you don't have a server running locally, you can change the URL to point directly at w3schools' copy:
xmlhttp.open("GET","http://www.w3schools.com/xml/cd_catalog.xml",false);
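For context, the surrounding call in the w3schools example looks roughly like this (note that modern browsers may block the cross-origin request unless the remote server allows it):
var xmlhttp = new XMLHttpRequest();
xmlhttp.open("GET", "http://www.w3schools.com/xml/cd_catalog.xml", false); // synchronous, as in the tutorial
xmlhttp.send();
var xmlDoc = xmlhttp.responseXML;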
Alternatively, play around on their online version ;)
Well, I guess you have to add the example XML (cd_catalog.xml) to your file system. And you definitely have to access the HTML file through a server (e.g. Apache).
First, ensure that both the HTML file (with the JavaScript block in it) and the XML file are placed in the same directory.
Next, you probably need to place those files under your local web server and open the HTML like this:
http://[local server host]/ajax.html
instead of opening the file directly from e.g. Windows Explorer:
C:\[path to the file]\ajax.html
In the latter case you'll get an "Access is denied" error.
-- Pavel
Are you running this under a web server, or just creating a couple of text files and loading them in your browser?
The "GET" request this relies upon could be failing.
Use Apache or another similar HTTP server and run the example as if it were hosted on the web.
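If you want to confirm that the GET is what's failing, a quick check (a sketch, reusing the xmlhttp object from the example):
xmlhttp.open("GET", "cd_catalog.xml", false);
xmlhttp.send();
console.log(xmlhttp.status); // 200 from a web server; 0 for a blocked file:// request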