Add attachment by url to Outlook mail - javascript

The context
There is a button on the homepage of each document set in a document library on a SharePoint Online environment. When the button is clicked, an Outlook window should open with the title and body pre-filled and all the files in the document set added as attachments.
The code
Here's the code I have so far:
var olApp = new ActiveXObject("Outlook.Application");
var olNs = olApp.GetNameSpace("MAPI");
var olItem = olApp.CreateItem(0); // 0 = olMailItem
var signature = olItem.HTMLBody;  // capture the default signature
olItem.Importance = 2;            // 2 = olImportanceHigh
olItem.To = "";
olItem.Cc = "";
olItem.Bcc = "";
olItem.Subject = "Pre filled title";
olItem.HTMLBody =
    "<span style='font-size:11pt;'>" +
    "<p>Pre filled body</p>" +
    "</span>";
olItem.HTMLBody += signature;
olItem.Display();
olItem.GetInspector.WindowState = 2; // 2 = olNormalWindow
var docUrl = "https://path_to_site/Dossiers/13245_kort titel/New Microsoft Word Document.docx";
olItem.Attachments.Add(docUrl); // <-- this is the line that fails
The Problem
When I run this code, an Outlook window opens with everything set correctly. But on the line where the attachment is added, I get the following very vague error message:
SCRIPT8: The operation failed.
I thought it could be the spaces in the URL, so I replaced them:
docUrl = docUrl.replace(/ /g, "%20");
That didn't work either (same error), and providing all the parameters explicitly didn't work either:
olItem.Attachments.Add(docUrl, 1, 1, "NewDocument");
Passing a path to a local file (e.g. C:/folder/file.txt) or a publicly available URL to an image does work. So my guess is it has something to do with permissions or security. Does anybody know how to solve this?
PS: I know using an ActiveX control is not the ideal way of working (browser limitations, security considerations, ...) but the situation is what it is and not in my power to change.

You cannot pass a URL to MailItem.Attachments.Add in OOM (it does work in Redemption - I am its author - for RDOMail.Attachments.Add). The Outlook Object Model only allows a fully qualified path to a local file or a pointer to another Outlook item (such as a MailItem).
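A workaround consistent with that limitation, sketched below as an untested guess rather than a confirmed fix: download the document to a local temporary file first (using the same IE/ActiveX context the question already depends on), then hand that local path to Attachments.Add. The helper name is made up, and this assumes the browser session is already authenticated against the SharePoint site. Outlook copies the file into the item on Add, so the temp file could be deleted once the mail is displayed.
// Sketch: fetch the file to a local temp path, then attach the local copy.
// ActiveX-only, like the rest of the question's code; names are illustrative.
function attachFromUrl(olItem, docUrl) {
    var xhr = new ActiveXObject("MSXML2.XMLHTTP");
    xhr.open("GET", docUrl, false); // synchronous, for brevity
    xhr.send();

    var stream = new ActiveXObject("ADODB.Stream");
    stream.Type = 1; // 1 = adTypeBinary
    stream.Open();
    stream.Write(xhr.responseBody); // raw bytes of the response

    var fso = new ActiveXObject("Scripting.FileSystemObject");
    var fileName = decodeURIComponent(docUrl.substring(docUrl.lastIndexOf("/") + 1));
    var tempPath = fso.BuildPath(fso.GetSpecialFolder(2).Path, fileName); // 2 = temp folder
    stream.SaveToFile(tempPath, 2); // 2 = adSaveCreateOverWrite
    stream.Close();

    olItem.Attachments.Add(tempPath); // OOM accepts a local path
}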

Related

What's the best method to EXTRACT product names given a list of SKU numbers from a website?

I have a problem.
I have a list of SKU numbers (hundreds) that I'm trying to match with the titles of the products they belong to. I have thought of a few ways to accomplish this, but I feel like I'm missing something... I'm hoping someone here has a quick and efficient idea to help me get this done.
The products come from Aidan Gray.
Attempt #1 (Batch Program Method) - FAIL:
After searching for a SKU in Aidan Gray, the website returns a URL that looks like below:
http://www.aidangrayhome.com/catalogsearch/result/?q=SKUNUMBER
... with "SKUNUMBER" obviously being a SKU.
The first result of the webpage is almost always the product.
To click the first result, the following can be entered in the address bar (if JavaScript execution from the address bar is enabled):
javascript:{document.getElementsByClassName("product-image")[0].click();}
I wanted to create a .bat file through Command Prompt and execute the following command:
firefox http://www.aidangrayhome.com/catalogsearch/result/?q=SKUNUMBER javascript:{document.getElementsByClassName("product-image")[0].click();}
... but Firefox doesn't seem to allow these two commands to execute in the same tab.
If that worked, I was going to go to http://tools.buzzstream.com/meta-tag-extractor, paste the resulting links to get the titles of the pages, export the data to CSV format, and copy over the data I wanted.
Unfortunately, I am unable to open both the webpage and the JavaScript in the same tab through a batch program.
Attempt #2 (I'm Feeling Lucky Method):
I was going to use Google's &btnI URL suffix to automatically redirect to the first result.
http://www.google.com/search?btnI&q=site:aidangrayhome.com+SKUNUMBER
After opening all the links in tabs, I was going to use a Firefox add-on called "Send Tab URLs" to copy the names of the tabs (which contain the product names) to the clipboard.
The problem is that most of the results were simply not lucky enough...
If anybody has an idea or tip to get this accomplished, I'd be very grateful.
I recommend using JScript for this. It's easy to include as hybrid code in a batch script, its structure and syntax is familiar to anyone comfortable with JavaScript, and you can use it to fetch web pages via XMLHTTPRequest (a.k.a. Ajax by the less-informed) and build a DOM object from the .responseText using an htmlfile COM object.
Anyway, challenge: accepted. Save this with a .bat extension. It'll look for a text file containing SKUs, one per line, and fetch and scrape the search page for each, writing info from the first anchor element with a .className of "product-image" to a CSV file.
@if (@CodeSection == @Batch) @then
@echo off
setlocal
set "skufile=sku.txt"
set "outfile=output.csv"
set "URL=http://www.aidangrayhome.com/catalogsearch/result/?q="
rem // invoke JScript portion
cscript /nologo /e:jscript "%~f0" "%skufile%" "%outfile%" "%URL%"
echo Done.
rem // end main runtime
goto :EOF
@end // end batch / begin JScript chimera

var fso = WSH.CreateObject('scripting.filesystemobject'),
    skufile = fso.OpenTextFile(WSH.Arguments(0), 1),
    skus = skufile.ReadAll().split(/\r?\n/),
    outfile = fso.CreateTextFile(WSH.Arguments(1), true),
    URL = WSH.Arguments(2);

skufile.Close();

String.prototype.trim = function() { return this.replace(/^\s+|\s+$/g, ''); }

// returns a DOM root object
function fetch(url) {
    var XHR = WSH.CreateObject("Microsoft.XMLHTTP"),
        DOM = WSH.CreateObject('htmlfile');
    WSH.StdErr.Write('fetching ' + url);
    XHR.open("GET", url, true);
    XHR.setRequestHeader('User-Agent', 'XMLHTTP/1.0');
    XHR.send('');
    while (XHR.readyState != 4) { WSH.Sleep(25) };
    DOM.write(XHR.responseText);
    return DOM;
}

function out(what) {
    WSH.StdErr.Write(new Array(79).join(String.fromCharCode(8)));
    WSH.Echo(what);
    outfile.WriteLine(what);
}

WSH.Echo('Writing to ' + WSH.Arguments(1) + '...');
out('sku,product,URL');

for (var i = 0; i < skus.length; i++) {
    if (!skus[i]) continue;
    var DOM = fetch(URL + skus[i]),
        anchors = DOM.getElementsByTagName('a');
    for (var j = 0; j < anchors.length; j++) {
        if (/\bproduct-image\b/i.test(anchors[j].className)) {
            out(skus[i] + ',"' + anchors[j].title.trim() + '","' + anchors[j].href + '"');
            break;
        }
    }
}

outfile.Close();
Too bad the htmlfile COM object doesn't support getElementsByClassName. :/ But this seems to work well enough in my testing.
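If you'd rather have the class lookup read like the missing DOM call, a small helper can stand in for it. This is a hypothetical sketch that assumes only getElementsByTagName and className, which the script above already uses:
// Hypothetical helper: emulate getElementsByClassName using only
// getElementsByTagName and className, which htmlfile does support.
function getByClassName(DOM, tagName, className) {
    var pattern = new RegExp('\\b' + className + '\\b', 'i'),
        candidates = DOM.getElementsByTagName(tagName),
        matches = [];
    for (var i = 0; i < candidates.length; i++) {
        if (pattern.test(candidates[i].className)) matches.push(candidates[i]);
    }
    return matches;
}
// The inner loop above would then collapse to something like:
// var productLinks = getByClassName(DOM, 'a', 'product-image');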

How to download a file with a very long URL?

Basically I'm trying to trigger the download of a file which is constructed dynamically. In order to construct the file I need to pass a lot of information to the servlet.
This means I can't use GET, I should use POST instead.
In order for the download not to mess the application up, though, I must open a new window. The problem is that I cannot do this:
window.open("http://very.long.url.com/.../", "_blank");
since this would cause the URL limit to be exceeded and GET to be used instead of POST.
All the information that needs to be sent to the servlet is in an object called "config".
My solution was to write a form with the POST method into the new window, add hidden inputs to it, and then trigger the submit, like this:
var win = window.open("about:blank", "_blank");
win.document.write("<form method='post' id='download' action='serveletURL'></form>");
$.each(config, function (name, value) {
    var fragment = document.createDocumentFragment(),
        temp = document.createElement('div');
    temp.innerHTML = '<input type=\'hidden\' name=\'' + name + '\' value=\'' + value + '\'>';
    while (temp.firstChild) {
        fragment.appendChild(temp.firstChild);
    }
    win.document.getElementById("download").appendChild(fragment);
});
win.document.getElementById('download').submit();
It works in Google Chrome and Firefox, but it doesn't work in IE. In IE (even IE 10), a download dialog with a "serveletURL" file name opens, and nothing gets downloaded when I ask for the file to be saved or opened.
I'm sure I'm missing something... How do I accomplish this?
Thank you!
Eduardo
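One variation that may behave better in IE, offered as an untested sketch rather than a confirmed fix: keep the form in the opener's document and point its target at a named window, so IE ties the POST response to that window from the start. serveletURL and config are the names from the question; the function name is illustrative.
// Sketch of an alternative that is sometimes friendlier to IE: keep the form
// in the opener document and POST it into a named window via target.
// "serveletURL" and "config" are the names used in the question.
function postToNewWindow(config) {
    var win = window.open("", "downloadWindow"); // named window, opened first

    var form = document.createElement("form");
    form.method = "post";
    form.action = "serveletURL";
    form.target = "downloadWindow"; // response lands in the window above

    $.each(config, function (name, value) {
        var input = document.createElement("input");
        input.type = "hidden";
        input.name = name;
        input.value = value;
        form.appendChild(input);
    });

    document.body.appendChild(form); // must be in the DOM for IE to submit it
    form.submit();
    document.body.removeChild(form); // clean up
}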

Issues in developing web scraper

I want to develop a platform where users can enter a URL and my website will open that webpage in an iframe. The user can then modify the page by simply right-clicking, and I will provide options like "remove this element" and "copy this element". I am almost done. Many websites open perfectly in the iframe, but for a few websites errors show up. I could not identify the reason, so I am asking for your help.
I have solved other issues, like the XSS problem.
Here is the procedure I have followed:
I use JavaScript to send the request to my Java server, which connects to the URL specified by the user, fetches the HTML, uses the Jsoup HTML parser to convert relative URLs into absolute ones, and saves the HTML to disk. I then render the saved HTML in the iframe.
Is something wrong somewhere?
A few websites work perfectly, but a few do not.
For example:-
When I tried to open http://www.snapdeal.com it gave me the
Uncaught TypeError: Cannot read property 'paddingTop' of undefined
error. I don't understand why this is happening.
Update
I really wonder how this is implemented: http://www.proxywebsites.in/browse.php?u=Oi8vd3d3LnNuYXBkZWFsLmNvbQ%3D%3D&b=13&f=norefer
Two issues; pick whichever you like:
your server-side proxy code contains bugs
plenty of sites either have explicit frame-break code or at least expect to be the top-level frame (see the snippet below)
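For reference, the classic frame-breaking snippet looks something like this; any page carrying it will kick itself out of your iframe as soon as its own script runs:
// Classic frame-breaking code: if this page is not the top-level window,
// replace the top-level window with this page, escaping any iframe.
if (top !== self) {
    top.location.replace(self.location.href);
}
Modern sites more often rely on an X-Frame-Options or Content-Security-Policy frame-ancestors response header instead, which the browser enforces before any script runs; serving a saved copy from your own host sidesteps the header but not inline frame-breaking script.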
You can try one more thing. In your proxy script you save the webpage to disk and then load it into the iframe. Instead of loading the saved page in the iframe, try opening that page directly in the browser. All the sites that restrict their pages from being loaded into an iframe should then open without error.
Try it; I think it can work.
My proxy server-side code:
DateFormat df = new SimpleDateFormat("ddMMyyyyHHmmss");
String dirName = df.format(new Date());
String dirPath = "C:/apache-tomcat-7.0.23/webapps/offlineWeb/" + dirName;
String serverName = "http://localhost:8080/offlineWeb/" + dirName;

boolean directoryCreated = new File(dirPath).mkdir();
if (!directoryCreated)
    log.error("Error in creating directory");

String html = Jsoup.connect(url.toString()).get().html();
doc = Jsoup.parse(html, url.toString());
links = doc.select("link");
scripts = doc.select("script");
images = doc.select("img");

// Rewrite relative URLs as absolute ones so the saved copy still resolves.
for (Element element : links) {
    String linkHref = element.attr("abs:href");
    if (!linkHref.isEmpty()) {
        element.attr("href", linkHref);
    }
}
for (Element element : scripts) {
    String scriptSrc = element.attr("abs:src");
    if (!scriptSrc.isEmpty()) {
        element.attr("src", scriptSrc);
    }
}
for (Element element : images) {
    String imgSrc = element.attr("abs:src");
    if (!imgSrc.isEmpty()) {
        element.attr("src", imgSrc);
        log.info(imgSrc);
    }
}
And now I am just returning the path where I saved my HTML file.
That's it for my server code.

WIX: Where and how should my CustomAction create and read a temporary file?

I have a script CustomAction (Yes, I know all about the opinions that say don't use script CustomActions. I have a different opinion.)
I'd like to run a command and capture the output. I can do this using the WScript.Shell COM object, invoking shell.Exec(). But this flashes a visible console window for the executed command.
To avoid that, I understand I can use the shell.Run() call, and specify "hidden" for the window appearance. But .Run() doesn't give me access to the StdOut of the executed process, so that means I'd need to create a temporary file and redirect the exe output to the temp file, then later read that temp file in script.
Some questions:
is this gonna work?
How do I generate a name for the temporary file? In .NET I could use a static method in the System.IO namespace, but I am using script here. I need to ensure that the user has RW access, and also that no anti-virus program is going to puke on this.
Better ideas? I am trying very hard to avoid C/C++.
I could avoid all this if there were a way to query websites in IIS7 from script, without resorting to the IIS6 Compatibility pack, without using .NET (Microsoft.Web.Administration.ServerManager), and without execing a process (appcmd list sites).
I already asked a separate question on that topic; any suggestions on that would also be appreciated.
Answering my own question...
yes, this is going to work.
Use the Scripting.FileSystemObject thing within JavaScript. There's a GetTempName() method that produces a file name suitable for temporary use, and a GetSpecialFolder() method that gets the location of the temp folder. There's even a BuildPath() method to combine them.
so far I don't have any better ideas.
Here's the code I used:
// These enums aren't built into JScript; define the values the script needs.
var SpecialFolders = { WindowsFolder: 0, SystemFolder: 1, TemporaryFolder: 2 };
var WindowStyle = { Hidden: 0 };
var OpenMode = { ForReading: 1 };

function GetWebSites_IIS7_B()
{
    var ParseOneLine = function(oneLine) {
        ...regex parsing of output...
    };

    LogMessage("GetWebSites_IIS7_B() ENTER");

    var shell = new ActiveXObject("WScript.Shell");
    var fso = new ActiveXObject("Scripting.FileSystemObject");
    var tmpdir = fso.GetSpecialFolder(SpecialFolders.TemporaryFolder);
    var tmpFileName = fso.BuildPath(tmpdir, fso.GetTempName());
    var windir = fso.GetSpecialFolder(SpecialFolders.WindowsFolder);
    var appcmd = fso.BuildPath(windir, "system32\\inetsrv\\appcmd.exe") + " list sites";

    // use cmd.exe to redirect the output to the temp file
    var rc = shell.Run("%comspec% /c " + appcmd + "> " + tmpFileName, WindowStyle.Hidden, true);

    var ts = fso.OpenTextFile(tmpFileName, OpenMode.ForReading);
    var sites = [];
    // Read from the file and parse the results.
    while (!ts.AtEndOfStream) {
        var oneLine = ts.ReadLine();
        var line = ParseOneLine(oneLine);
        LogMessage("  site: " + line.name);
        sites.push(line);
    }
    ts.Close();
    fso.DeleteFile(tmpFileName);
    return sites;
}

Collect DOM elements from external HTML documents

I am trying to write a report generator to collect user comments from a list of external HTML files. The user comments are wrapped in <span> elements.
Can this be done using JavaScript?
Here's my attempt:
function generateCommentReport()
{
    var files = document.querySelectorAll('td a'); // Files to scan are links in an HTML table
    var outputWindow = window.open(); // Output browser window for report
    for (var i = 0; i < files.length; i++) {
        // Open each file in a browser window
        win = window.open();
        win.location.href = files[i].href;
        // Scan opened window for 'comment's
        comments = win.document.querySelectorAll('.comment');
        for (var j = 0; j < comments.length; j++) {
            // Add to output report
            outputWindow.document.write(comments[j].innerHTML);
        }
    }
}
You will need to wait for onload on the target window before you can read content from its document.
Also, what type of element is comment? In general you can't put a name on just any element. While unknown attributes like a misplaced name may be ignored, you can't guarantee that browsers will take account of them for getElementsByName. (In reality, most browsers do, but IE doesn't.) A class might be a better bet?
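A minimal sketch of that advice, assuming the '.comment' selector from the question's code: defer reading the opened document until its load event fires.
// Sketch: read the opened window only once it has finished loading.
var win = window.open(files[i].href);
win.onload = function () {
    var comments = win.document.querySelectorAll('.comment');
    for (var j = 0; j < comments.length; j++) {
        outputWindow.document.write(comments[j].innerHTML);
    }
};
Inside a loop, the index needs to be captured per window, which is exactly what the factory in the solution below is for.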
Every web browser works in a defined and controlled workspace on the user's computer, where certain things, like the file system, are restricted for code. These are safety standards to ensure that no malicious code from the internet runs on your system to phish sensitive information stored on it. A web browser only allows such access if the user grants it explicitly.
But I can suggest, for an internet application:
- If the list of comments is static, then cache it, whether via XML, JSON, or cookies [it will be stored on the user's system until it expires]
- If dynamic, then use Ajax to retrieve it
I think I have the solution to this.
var windows = [];
var report = null;

function handlerFunctionFactory(i, max) {
    return function (evt) {
        // Scan opened window for 'comment's
        var comments = windows[i].document.querySelectorAll('.comment');
        for (var j = 0; j < comments.length; j++) {
            // Add to output report
            report.document.write(comments[j].innerHTML);
        }
        if ((i + 1) == max) {
            report.document.write("</body></html>");
            report.document.close();
        }
        windows[i].close();
    }
}

function generateReport()
{
    var files = document.querySelectorAll('td a'); // The list of files to scan is stored as links in an HTML table
    report = window.open(); // Output browser window for report
    report.title = 'Comment Report';
    report.document.open();
    report.document.write('<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">'
        + '<html><head><title>Comment Report</title>'
        + '</head><body>');
    for (var i = 0; i < files.length; i++) {
        // Open each file in a browser window
        var win = window.open();
        windows.push(win);
        win.location.href = files[i].href;
        win.onload = handlerFunctionFactory(i, files.length);
    }
}
Any refactoring tips are welcome. I am not entirely convinced that a factory is the best way to bind the onload handlers to an instance, for example.
This works only on Firefox :(
