Client-side equivalent of PHP require_once - JavaScript

I like to replicate header/footer content using require_once at the top of my document like so:
<?php require_once( "SNIPPETS/HEADER.php" ); ?>
Where the snippet header.php has everything from my <!DOCTYPE> and opening <html> to my page header in the <body> (navigation, logo, etc.), and I do the same for the footer. It is immensely helpful when updating multi-page sites.
I am working on a project for a small company that uses a sales/web platform that is fairly restrictive and does not support PHP or any other server-side scripting. The thing is, the website is actually going to be fairly complex and may need revisions, so I want to use this methodology if at all possible.
I am stuck with HTML/CSS/JS. Is there any function or workaround that I can use to do this?
I was thinking I might be able to have an externally hosted snippet db file (xml or json) that I can call and read with js, and then do an innerHTML or outerHTML replacement of the <head></head>, <div class='header'></div>, and <footer></footer> tags.
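Roughly, something like this sketch is what I had in mind, fetching a snippets file and swapping in the shared markup (snippets.json, the selectors, and the hosting URL are just placeholders, and the snippet host would need to allow cross-origin requests):
// Hypothetical sketch: pull shared snippets from a JSON file and inject them.
// Assumes snippets.json looks like:
// { "header": "<div class='header'>...</div>", "footer": "<footer>...</footer>" }
fetch("https://example.com/snippets.json")
    .then(function (response) { return response.json(); })
    .then(function (snippets) {
        // Replace the placeholder elements with the shared markup.
        document.querySelector("div.header").outerHTML = snippets.header;
        document.querySelector("footer").outerHTML = snippets.footer;
    })
    .catch(function (error) {
        console.error("Could not load snippets:", error);
    });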
But that seems maybe a tad inelegant, so I was wondering if anyone else has had a similar problem and found a better solution?

Going strictly by your title, you can emulate require_once() in JavaScript. I'm using jQuery, and my function looks something like this:
var loadedScriptUrls = [];

function loadJavascriptOnce(url) {
    // do it only when not loaded before
    if (loadedScriptUrls.indexOf(url) < 0) {
        $.getScript(url)
            .done(function (script, textStatus) {
                // remember the loaded script
                loadedScriptUrls.push(url);
            })
            .fail(function (jqXHR, textStatus, error) {
                alert("loadJavascriptOnce(): Script [" + url + "] error: " + error);
            });
    }
}
Any script can be loaded like this:
loadJavascriptOnce("https://js/my_script.js");
Remember to always use the same form of URL for a given script (for instance, always the full URL), otherwise the duplicate check will not match.
Note that this is all written in JavaScript.
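If jQuery is not available, the same idea can be sketched in plain JavaScript by appending a <script> element; the function and variable names below are just illustrative:
var loadedScriptUrls = [];

function loadScriptOnce(url) {
    // Skip URLs that have already been requested.
    if (loadedScriptUrls.indexOf(url) >= 0) {
        return;
    }
    loadedScriptUrls.push(url); // remember it immediately so repeat calls are ignored
    var script = document.createElement("script");
    script.src = url;
    script.onerror = function () {
        alert("loadScriptOnce(): could not load " + url);
    };
    document.head.appendChild(script);
}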

Related

How to use JavaScript and/or PHP to detect a website/page being stolen/cloned and then redirect the reader back to my website

I found hundreds of cloned versions of my website.
Whoever is doing this is using code that clones my web pages and changes my website name mydomain.com to clone1.com, clone2.com, clone3.com, etc., which makes it impossible to use simple JS or PHP to check whether the current URL equals mysite.com and redirect if it does not.
It also does not work using .htaccess.
For this reason I have created this code:
<script type="text/javascript">
if (window.location.href== "http://clone1.com/cat1/{{{ $title->id }}}-{{ (Str::slug($title->title)) }}/cat2/{{ $se->n }}/cat3/{{ $episode->ep_n }}")
{
window.location.href = 'http://google.com/';
}
</script>
This script serves its purpose, but it is too long and also very restrictive because it must contain the exact URL.
I'm looking to do this:
<script type="text/javascript">
if (window.location.href== "http://
//contains this part in its URL
clone1.com , clone2.com , clone3.com , clone4....
}}")
{
window.location.href = 'http://google.com/';
}
</script>
How can I create a global JS (JavaScript) check that detects whether the current page is not on my domain and then redirects the reader to my domain and the same page?
Many thanks
1. Best Solution - Early Detection
Depending on your main traffic source, it is possible to detect who is scraping you and block them based on their IP, headers, number of page views and other data, using PHP & .htaccess.
I really like this answer on Stack Overflow, which discusses almost all the options available for early detection:
How to detect fake users ( crawlers ) and cURL
2. Plugins & Extensions for Open Source Content Management Systems
Wordpress
If using Wordpress CMS, you can try some plugins, like WordFence, that can detect and block fake Google Crawlers, block based on the number of page views etc.
Other CMS
If you can't find a similar solution for your CMS of choice, consider asking its community for help with creating one, as I believe many people could benefit from it.
3. Solution for already stolen content with JavaScript
Sometimes the easiest way to protect something with JS is to actually HIDE it by OBFUSCATING it and spreading it across multiple important files. For example, obfuscate some important file on your website without which the website just wouldn't work properly.
Then put an obfuscated version of the code below somewhere in a JS file in the header. You can obfuscate the code using any of the free online services, or with an obfuscation library from GitHub:
Non-Obfuscated:
var w = 'mysite.com'; // your domain; used both for the hostname check and as the redirect target

function check_origin(){
    var check = 587;
    if (window.location.hostname != w) {
        window.location.href = w;
    }
    return check;
}

var check = check_origin();
Obfuscated example:
var _0x303e=["\x6D\x79\x73\x69\x74\x65\x2E\x63\x6F\x6D","\x68\x6F\x73\x74\x6E\x61\x6D\x65","\x6C\x6F\x63\x61\x74\x69\x6F\x6E","\x68\x72\x65\x66"];w= _0x303e[0];function check_origin(){var check=587;if(window[_0x303e[2]][_0x303e[1]]!= w){window[_0x303e[2]][_0x303e[3]]= w};return check}var check=check_origin()
Now put additional code in some footer JS file to verify that the code above wasn't modified in any way:
Non-Obfuscated example:
if (w !== 'mysite.com' || check == false || typeof check == 'undefined' || check !== 587) {
    window.location.href = 'mysite.com';
}
Obfuscated:
var _0x92bb=["\x6D\x79\x73\x69\x74\x65\x2E\x63\x6F\x6D","\x75\x6E\x64\x65\x66\x69\x6E\x65\x64","\x68\x72\x65\x66","\x6C\x6F\x63\x61\x74\x69\x6F\x6E"];if(w!== _0x92bb[0]|| check== false|| typeof check== _0x92bb[1]|| check!== 587){window[_0x92bb[3]][_0x92bb[2]]= _0x92bb[0]}
I used a free online service from Google's search results for the term "Free Online JS Obfuscator":
https://javascriptobfuscator.com/Javascript-Obfuscator.aspx
4. Fight thieves with available methods e.g. Request a Ban from Search Engines – The Digital Millennium Copyright Act of 1998
Here is a blog-post that describes what to do when someone is stealing your content.
https://lorelle.wordpress.com/2006/04/10/what-do-you-do-when-someone-steals-your-content/
You can investigate who is doing that and report them to their partners, search engines, advertisers - to disrupt their business.
Depending on their country of origin and yours, it may even be possible to sue them and win.
Why not simply check whether the hostname is yours?
if (window.location.hostname != 'mysite.com') {
    window.location.href = 'http://google.com/';
}
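Building on that check, here is a rough sketch that also keeps the reader on the same page when redirecting back to your domain, as the question asks (the canonical hostname is just a placeholder):
// Hypothetical sketch: redirect clones back to the same path on the real domain.
var canonicalHost = 'mysite.com';
if (window.location.hostname != canonicalHost) {
    // Preserve the path and query string so the reader lands on the same page.
    window.location.href = 'http://' + canonicalHost +
        window.location.pathname + window.location.search;
}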

CQ: Why does jQuery add /ajax to the start of my web service URL?

I have written a little servlet that outputs data in the form of an RSS feed. It's running on my webserver at /services/rss.servlet and is returning data nicely.
In my webpage I am attempting to load data from the rss servlet like so:
$(document).ready(function() {
    $.get("/services/rss.servlet")
        .done(function(data) {
            console.log("Success: " + data);
        })
        .fail(function(jqxhr, textStatus, error) {
            var err = textStatus + ", " + error;
            console.log("Request Failed: " + err);
        });
});
MOST of the time, this works fine and I get data. But every now and then, the request fails and I see the following request in my network debugging page:
Request GET /ajax/services/rss.servlet HTTP/1.1
Why am I seeing /ajax prepended to my URL? It seems completely undocumented in jQuery. In particular, I notice this behavior all the time in IE9 in quirks mode, but not in IE9 in standards mode.
I discovered that this problem was unique to a "feature" of Adobe CQ/WEM.
In a "normal" layout, CQ would have the following subdirectories publicly exposed:
www.example.com/apps
www.example.com/libs
www.example.com/etc
However, CQ contains code allowing it to be hosted in a relative URL deeper than the document root, in case your directory structure happened to be...
www.example.com/subdirectory/cqRoot/apps
www.example.com/subdirectory/cqRoot/etc
www.example.com/subdirectory/cqRoot/libs
The code that supports this is all client-side JavaScript, which scans the <head> for these directories and figures out where CQ "ought to be".
In our case, one of the first items in our <head> was <script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8/jquery-ui.min.js"></script>, and CQ thought this meant the application was stored under /ajax.
There is a configuration to defeat this feature starting in CQ 5.6 from the Felix console. http://help-forums.adobe.com/content/adobeforums/en/experience-manager-forum/adobe-experience-manager.topic.685.html/forum__omci-we_are_on_cq54.html
You can also force it to be disabled in your <head>, so that it does not auto-detect.
<script>
window.CQURLInfo = window.CQURLInfo || {};
CQURLInfo.contextPath = "";
</script>

Search for injected script code on the Windows Server for my ASP.NET site

My site was probably hacked. I am finding script.js from bigcatsolutions.com in my pages. It triggers a popup for an affiliate program. The script isn't on the page by default, and I want to know how I can find where it was injected. The script sometimes injects other ad sites.
In Chrome I can see the injected script code:
function addEvent(obj, eventName, func) {
    if (obj.attachEvent) {
        obj.attachEvent("on" + eventName, func);
    } else if (obj.addEventListener) {
        obj.addEventListener(eventName, func, true);
    } else {
        obj["on" + eventName] = func;
    }
}

addEvent(window, "load", function (e) {
    addEvent(document.body, "click", function (e) {
        if (document.cookie.indexOf("booknow") == -1) {
            params = 'width=800';
            params += ', height=600';
            params += ', top=50, left=50,scrollbars=yes';
            var w = window.open("http://booknowhalong.com/discount-news", 'window', params).blur();
            document.cookie = "booknow";
            window.focus();
        }
    });
})
My site was moved from my hosting company to an Amazon EC2 Windows 2013 Server instance and I still have the issue, so the code must still reside somewhere on the server. My site was built using ASP.NET / C#.
Things I did:
tried searching the original .aspx and .aspx.cs code files
Have you checked the IIS logs to see if they are hitting a specific page and injecting it there?
Do you load any data from a database? You could check in the tables and see if anything out of the ordinary appears there.
It is unlikely that the .aspx pages have actually been physically modified, and even more unlikely that the DLLs have been, as .aspx.cs files are compiled into your BIN folder as DLLs. The more likely scenario is that you have an insecure page that a malicious site is injecting its script into. The other possible attack vector is that malicious code got in via SQL injection and is being loaded each time.
After deeper searching (I missed it in the first run), I found that the script was injected into the ASP.NET master page.
I ran a search for a specific string across all the files, and that's how I found it. It seems that the server itself was breached and the hacker put the code into several websites.
So for those of you who have this type of problem, I recommend running a text search and trying to find the URL that is tied to the injected script.
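For anyone doing the same hunt, here is a rough Node.js sketch of that kind of recursive text search (the web-root path and the search string are just examples; the built-in Windows findstr command would do the same job):
// Hypothetical sketch: recursively scan a folder for files containing a marker string.
const fs = require("fs");
const path = require("path");

function searchFiles(dir, needle) {
    for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
        const full = path.join(dir, entry.name);
        if (entry.isDirectory()) {
            searchFiles(full, needle);
        } else if (fs.readFileSync(full, "utf8").includes(needle)) {
            console.log("Found in: " + full);
        }
    }
}

searchFiles("C:\\inetpub\\wwwroot", "bigcatsolutions.com");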
Hope that helps, and thanks for your time.

How do I load an external file and make sure that it runs first in JSFiddle?

I have a JsFiddle here, and I added Microsoft AJAX to be loaded through the external JS/resource section. How can I tell whether my JS code runs only after the AJAX file has finished loading?
It seems that Microsoft AJAX does not load either. :(
Here is the code in the JSFiddle:
Type.registerNamespace("Tutorial.Chapter1");

Tutorial.Chapter1.Person = function(firstName, lastName) {
    this._firstName = firstName;
    this._lastName = lastName;
};

Tutorial.Chapter1.Person.prototype = {
    set_firstName: function(value) {
        this._firstName = value;
    },
    get_firstName: function() {
        return this._firstName;
    },
    set_lastName: function(value) {
        this._lastName = value;
    },
    get_lastName: function() {
        return this._lastName;
    },
    _firstName: "",
    _lastName: "",
    displayName: function() {
        alert("Hi! " + this._firstName + " " + this._lastName);
    }
};

Tutorial.Chapter1.Person.registerClass("Tutorial.Chapter1.Person", null);
The External Resources tab of jsFiddle is currently somewhat tricky and unstable to use.
The resources defined there are often not correctly included in the code. There seems to be an issue with the automatic recognition of JS and CSS resources; when it fails, the external resource is simply not added to the head section of the resulting code. You can check this by viewing the source of the Result frame of your jsFiddle: you will find that your MS AJAX resource is simply NOT mentioned in the resulting HTML code.
Correct recognition can be forced by adding a dummy parameter to the resource's URL, like this (see the jsFiddle docs for more info):
...&dummy=.js
Here is an example that shows how to add the external Google Maps API resource to a jsFiddle (mind the dummy parameter at the very end!):
https://maps.googleapis.com/maps/api/js?sensor=false&dummy=.js
Unfortunately this won't work for you as the MS AJAX URL will fail when additional parameters are appended.
A solution (and currently the safest way to load external resources) is to avoid the External Resources tab altogether and load external code manually in the first line(s) of jsFiddle's HTML panel, like this:
<script type='text/javascript' src="http://ajax.aspnetcdn.com/ajax/3.5/MicrosoftAjax.js"></script>
Here is your jsFiddle modified to use that method: http://jsfiddle.net/rEzW5/12/
It actually does not do a lot (I did not check what is wrong with the rest of your code), but at least it does not throw JavaScript errors anymore.
Open "Add Resources" section and add the url of your external script...
@Jpsy's approach no longer seems to work (see my comment under his answer).
For me, adding the resource under External Resources also didn't work (according to the Firefox debugger, it couldn't find the resource).
The only way I was able to get an external bit of JavaScript code (in my case jquery.backstretch.js) to work was to use Google to find a Fiddle which used this resource (and worked), then fork that Fiddle and copy/paste all my code into the HTML, CSS and JavaScript panels. Ugh!
@clayRay, you absolutely went through code surgery there. I resolved this by referencing the external source in plain HTML, which in my case is
<script src="https://code.jquery.com/jquery-2.2.4.min.js"></script>
Using the External Resources tab didn't help a bit...

How to get data with JavaScript from another server?

How can I make requests to other servers (i.e. get a page from any desired server) with JavaScript running in the user's browser? There are limitations in place to prevent this for methods like XMLHttpRequest; are there ways to bypass them, or other methods?
That is the general question. Specifically, I want to check a series of random websites and see if they contain a certain element, so I need the HTML content of a website without downloading any additional files, all from a JavaScript file, and without any forwarding or proxy mechanism on a server.
(Note: one way is using Greasemonkey and its GM_xmlhttpRequest.)
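For completeness, a minimal sketch of that Greasemonkey route (this only runs inside a userscript with the appropriate @grant, the exact API name varies between Greasemonkey/Tampermonkey versions, and the target URL and element check are made up):
// ==UserScript==
// @name         Fetch another site's HTML
// @match        *://*/*
// @grant        GM_xmlhttpRequest
// ==/UserScript==

GM_xmlhttpRequest({
    method: "GET",
    url: "http://example.com/",
    onload: function (response) {
        // responseText holds the raw HTML of the other site.
        var hasElement = response.responseText.indexOf("id=\"certain-element\"") !== -1;
        console.log("Element found: " + hasElement);
    }
});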
You should check out jQuery. It has a rich base of AJAX functionality that can give you the power to do all of this. You can load in an external page and parse its HTML content with intuitive CSS-like selectors.
An example using $.get():
$.get("anotherPage.html", {}, function(results) {
    alert(results); // will show the HTML from anotherPage.html
    alert($(results).find("div.scores").html()); // show the "scores" div in the results
});
For external domains I've had to author a local PHP script that will act as a middle-man. jQuery will call the local PHP script passing in another server's URL as an argument, the local PHP script will gather the data, and jQuery will read the data from the local PHP script.
$.get("middleman.php", {"site":"http://www.google.com"}, function(results){
alert(results); // middleman gives Google's HTML to jQuery
});
Giving middleman.php something along the lines of
<?php
// Do not use as-is, this is only an example.
// $_GET["site"] set by jQuery as "http://www.google.com"
print file_get_contents($_GET["site"]);
?>
Update 2018:
You can only access cross-domain data under one of the following four conditions:
1. The response header contains Access-Control-Allow-Origin: *
Demo
$.ajax({
    url: 'https://api.myjson.com/bins/bq6eu',
    success: function(response) {
        console.log(response.string);
    },
    error: function(response) {
        console.log('server error');
    }
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
2. Use a server as a bridge or proxy to the target
Demo:
$.ajax({
    url: 'https://cors-anywhere.herokuapp.com/http://whatismyip.akamai.com/',
    success: function(response) {
        console.log('server IP: ' + response);
    },
    error: function(response) {
        console.log('bridge server error');
    }
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
3. Use a browser add-on to enable Access-Control-Allow-Origin: *
4. Disable browser web security:
Chrome
chrome.exe --args --disable-web-security
Firefox
about:config -> security.fileuri.strict_origin_policy -> false
My old (noob) answer from 2011:
$.get() can get data from jsbin.com, but I don't know why it can't get data from another site like google.com:
$.get('http://jsbin.com/ufotu5', {}, function(results) {
    alert(results);
});
demo: http://jsfiddle.net/Xj234/
Tested with Firefox, Chrome and Safari.
Write a proxy script that forwards along the HTTP request from your domain; this will bypass the XMLHttpRequest restrictions.
If you're using PHP, simply use cURL to request and read the page, then output the HTML as if it were from your own domain.
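The answer above suggests PHP and cURL; purely as an illustration of the same bridge idea, here is a rough sketch of such a proxy in Node.js (the port and the site query parameter are arbitrary, there is no validation, and it is not meant for production use as-is):
// Hypothetical proxy sketch: /proxy?site=<url> fetches the page server-side
// and returns its HTML, so the browser only ever talks to your own domain.
const http = require("http");
const https = require("https");
const { URL } = require("url");

http.createServer(function (req, res) {
    const target = new URL(req.url, "http://localhost").searchParams.get("site");
    if (!target) {
        res.writeHead(400);
        return res.end("Missing ?site= parameter");
    }
    const client = target.startsWith("https") ? https : http;
    client.get(target, function (upstream) {
        res.writeHead(upstream.statusCode, { "Content-Type": "text/html" });
        upstream.pipe(res); // stream the remote HTML back to the browser
    }).on("error", function () {
        res.writeHead(502);
        res.end("Upstream request failed");
    });
}).listen(8080);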
This is rather easy... if you know the 'secret' trick almost nobody shares.
It's called Yahoo YQL.
So in order to regain 'power to the user' (and returning to the convenient mantra: 'never accept no'), just use http://query.yahooapis.com/ (instead of a PHP proxy server-side script).
jQuery would not be strictly needed.
EXAMPLE 1:
Using the SQL-like command:
select * from html
where url="http://stackoverflow.com"
and xpath='//div/h3/a'
The following link will scrape SO for the newest questions (bypassing cross-domain security bull$#!7):
http://query.yahooapis.com/v1/public/yql?q=select%20title%20from%20html%20where%20url%3D%22http%3A%2F%2Fstackoverflow.com%22%20and%0A%20%20%20%20%20%20xpath%3D%27%2F%2Fdiv%2Fh3%2Fa%27%0A%20%20%20%20&format=json&callback=cbfunc
As you can see, this will return a JSON array (one can also choose XML) and call the callback function cbfunc.
Indeed, as a 'bonus' you also save a kitten every time you did not need to regex data out of 'tag-soup'.
Do you hear your little mad scientist inside yourself starting to giggle?
Then see this answer for more info (and don't forget its comments for more examples).
Good Luck!
You can also use an iframe to emulate an AJAX request. This saves you the mess of having to code a backend solution for a frontend problem. Here is an example:
function setUploadEvent(typeComponent) {
    var eventType = "";
    var iframe = document.getElementById("iframeId");
    try {
        /* for Mozilla / Opera9 */
        if (/(?!.*?compatible|.*?webkit)^mozilla|opera/i.test(navigator.userAgent)) {
            eventType = "onload";
        } else {
            /* IE */
            eventType = "onreadystatechange";
        }
        iframe[eventType] = function() {
            var doc = iframe.contentDocument || iframe.contentWindow.document;
            var response = doc.body.innerHTML; /* or whatever content you are looking for */
        };
    } catch (e) {
        alert("Error loading content");
    }
}
That should do the trick. Please note that the browser-detection line is not the cleanest; you should absolutely use the detection routines provided by the most common JS frameworks (Prototype, jQuery, etc.).
You will need to write a proxy on the server to do this: all requests will go to your server, and your server will then load the HTML and send it back to the client. There is no good way to implement this via JavaScript only.
jQuery contains functionality to load JSON data or external scripts using XMLHttpRequest, but this functionality cannot be used for HTML pages. Also, you may check this thread of the jQuery mailing list.
<script language="JavaScript" type="text/javascript" src="http://www.example.com/hello.js"></script>
You add the data into hello.js as an array, JSON or similar. Example:
var daysInMonth = new Array(31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31);
Getting JavaScript from another server doesn't get much simpler. :-)
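A slightly more flexible variant of the same trick is the JSONP-style pattern, where the remote script calls a function you define on the page; a rough sketch (the callback name and the contents of hello.js are made up):
<script>
// Define the callback before the remote script loads. A JSONP-style hello.js
// would end with a call such as: handleData({ "daysInMonth": [31, 28, 31] });
function handleData(data) {
    alert("Days in January: " + data.daysInMonth[0]);
}
</script>
<script src="http://www.example.com/hello.js"></script>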
Thanks a lot, this is really a good trick. I did it this way:
test.html
<!DOCTYPE html>
<html>
<head>
<script>
function loadXMLDoc() {
    var xmlhttp;
    if (window.XMLHttpRequest) {
        // code for IE7+, Firefox, Chrome, Opera, Safari
        xmlhttp = new XMLHttpRequest();
    } else {
        // code for IE6, IE5
        xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
    }
    xmlhttp.onreadystatechange = function() {
        if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
            document.getElementById("myDiv").innerHTML = xmlhttp.responseText;
        }
    };
    xmlhttp.open("GET", "sp.php", true);
    xmlhttp.send();
}
</script>
</head>
<body>
<h2>Using the XMLHttpRequest object</h2>
<div id="myDiv"></div>
<button type="button" onclick="loadXMLDoc()">Change Content</button>
</body>
</html>
sp.php
<?php
print file_get_contents("http://your.url.com/you-can-use-cross-domain");
?>
