How to use Discovery XML in WOPI? - javascript

As mentioned in this document http://wopi.readthedocs.io/en/latest/discovery.html, I was wondering if there is a way to build the action URL dynamically?

What do you mean by "dynamically"?
When you load the discovery file, you can dynamically build action URLs by replacing placeholders such as <ui=UI_LLCC&>.
Here's my C# code, which should be easily transformable to Java:
public async Task<string> GetFileUrlAsync(string extension, string wopiFileUrl, WopiActionEnum action, WopiUrlSettings urlSettings = null)
{
    var combinedUrlSettings = new WopiUrlSettings(urlSettings.Merge(UrlSettings));
    var template = await WopiDiscoverer.GetUrlTemplateAsync(extension, action);
    if (!string.IsNullOrEmpty(template))
    {
        // Resolve the optional parameters
        var url = Regex.Replace(template, @"<(?<name>\w*)=(?<value>\w*)&*>", m => ResolveOptionalParameter(m.Groups["name"].Value, m.Groups["value"].Value, combinedUrlSettings));
        url = url.TrimEnd('&');

        // Append the mandatory parameters
        url += "&WOPISrc=" + Uri.EscapeDataString(wopiFileUrl);

        return url;
    }
    return null;
}
Note that the WopiUrlBuilder uses a WopiDiscoverer, which facilitates low-level operations on the discovery file.
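For a JavaScript consumer, the same placeholder substitution can be sketched like this (a minimal sketch, not the WopiHost API; the action/ext attribute names and the urlsrc template format come from the WOPI discovery schema, everything else is an assumption):

// Minimal sketch: pick an action's urlsrc template out of the discovery XML
// and turn it into a usable URL.
function buildActionUrl(discoveryXml, extension, actionName, wopiSrc, params) {
    var doc = new DOMParser().parseFromString(discoveryXml, "text/xml");
    var action = doc.querySelector("action[name='" + actionName + "'][ext='" + extension + "']");
    if (!action) return null;
    var template = action.getAttribute("urlsrc");

    // Resolve optional placeholders such as <ui=UI_LLCC&>; drop the ones we
    // have no value for (params is a map like { UI_LLCC: "en-US" }).
    var url = template.replace(/<(\w+)=(\w+)&?>/g, function (m, name, value) {
        return params[value] ? name + "=" + encodeURIComponent(params[value]) + "&" : "";
    });
    url = url.replace(/[?&]$/, "");

    // Append the mandatory WOPISrc parameter.
    url += (url.indexOf("?") === -1 ? "?" : "&") + "WOPISrc=" + encodeURIComponent(wopiSrc);
    return url;
}

// Example usage (hypothetical values):
// buildActionUrl(xmlText, "docx", "edit", "https://host/wopi/files/1", { UI_LLCC: "en-US" });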

Related

Retrieving file data in chunks using Web API for display in browser (WIP)

I have this working, but I want to share it to see if I missed anything obvious, and to solve a mystery as to why my file chunk size has to be a multiple of 2049. The main requirements are:
Files uploaded from the website must be stored in SQL Server, not as files
The website must be able to download and display file data as a file (opened in a separate window)
The website is an angularjs/javascript SPA, no server-side code, no MVC
The API is Web API 2 (again, not MVC)
I'm just going to focus on the download part here. Basically what I'm doing is:
Read a chunk of data from a SQL Server varbinary field
The Web API 2 action returns the file name, MIME type and byte data as a base64 string. NOTE - I tried returning a byte array, but Web API just serializes it into a base64 string anyway.
Concatenate the chunks, convert them to a blob and display it
VB library function that returns a dataset with the chunk (I have to use this library, which handles the database connection but doesn't support parameterized queries):
Public Function GetWebApplicationAttachment(ByVal intId As Integer, ByVal intChunkNumber As Integer, ByVal intChunkSize As Integer) As DataSet
    ' the starting number is NOT 0 based
    Dim intStart As Integer = 1
    If intChunkNumber > 1 Then intStart = ((intChunkNumber - 1) * intChunkSize) + 1
    ' NOTE: values are concatenated into the SQL because the library lacks parameterized queries
    Dim strQuery As String = ""
    strQuery += "SELECT FileName, "
    strQuery += "SUBSTRING(ByteData," & intStart.ToString & "," & intChunkSize.ToString & ") AS ByteData "
    strQuery += "FROM FileAttachments WHERE Id = " & intId.ToString & " "
    Try
        Return Query(strQuery)
    Catch ex As Exception
        ...
    End Try
End Function
Web API business rules bit that creates the file object from the dataset
...
result.FileName = ds.Tables[0].Rows[0]["FileName"].ToString();
// NOTE: Web API converts a byte array to a base64 string, so the result is the same either way.
// As a result, the returned data will be about 33% bigger than the chunk size requested (base64 encodes 3 bytes as 4 characters).
result.StringData = Convert.ToBase64String((byte[])ds.Tables[0].Rows[0]["ByteData"]);
//result.ByteData = (byte[])ds.Tables[0].Rows[0]["ByteData"];
... some code to get the mime type
result.MIMEType = ...
Web API controller (simplified - all security and error handling removed)
public IHttpActionResult GetFileAttachment([FromUri] int id, int chunkSize, int chunkNumber) {
    brs = new Files(...);
    fileResult file = brs.GetFileAttachment(id, chunkNumber, chunkSize);
    return Ok(file);
}
angularjs service that gets the chunks recursively and puts them together
function getFileAttachment2(id, chunkSize, chunkNumber, def, fileData, mimeType) {
    var deferred = def || $q.defer();
    $http.get(webServicesPath + "api/files/get-file-attachment?id=" + id + "&chunkSize=" + chunkSize + "&chunkNumber=" + chunkNumber).then(
        function (response) {
            // when completed, the string data will be empty
            if (response.data.StringData === "") {
                response.data.MIMEType = mimeType;
                response.data.StringData = fileData;
                deferred.resolve(response.data);
            } else {
                if (chunkNumber === 1) {
                    // only the first chunk computes the mime type
                    mimeType = response.data.MIMEType;
                }
                fileData += response.data.StringData;
                chunkNumber += 1;
                // recurse with the same argument list this function declares
                getFileAttachment2(id, chunkSize, chunkNumber, deferred, fileData, mimeType);
            }
        },
        function (response) {
            ... error stuff
        }
    );
    return deferred.promise;
}
angular controller method that makes the calls.
function viewFile(id) {
    sharedInfo.getWebPortalSetting("FileChunkSize").then(function (result) {
        // chunk size must be a multiple of 2049 ???
        var chunkSize = 0;
        if (result !== null) chunkSize = parseInt(result);
        fileHelper.getFileAttachment2(id, chunkSize, 1, null, "", "").then(function (result) {
            if (result.error === null) {
                if (!fileHelper.viewAsFile(result.StringData, result.FileName, result.MIMEType)) {
                    ... error
                }
                result = {};
            } else {
                ... error;
            }
        });
    });
}
And finally the bit of javascript that displays the file as a download
function viewAsFile(fileData, fileName, fileType) {
    try {
        fileData = window.atob(fileData);
        var ab = new ArrayBuffer(fileData.length);
        var ia = new Uint8Array(ab); // ia provides a window into the array buffer
        for (var i = 0; i < fileData.length; i++) {
            ia[i] = fileData.charCodeAt(i);
        }
        var file = new Blob([ab], { type: fileType });
        fileData = "";
        if (window.navigator.msSaveOrOpenBlob) // IE10+
            window.navigator.msSaveOrOpenBlob(file, fileName);
        else { // Others
            var a = document.createElement("a"),
                url = URL.createObjectURL(file);
            a.href = url;
            a.download = fileName;
            document.body.appendChild(a);
            a.click();
            setTimeout(function () {
                document.body.removeChild(a);
                window.URL.revokeObjectURL(url);
            }, 0);
        }
        return true;
    } catch (e) {
        ... error stuff
    }
}
I can see already that a more RESTful approach would be to use headers to indicate the chunk range and to separate the file metadata from the file chunks. I could also try returning a data stream rather than a Base64-encoded string. If anyone has tips on that, let me know.
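A plausible explanation for the multiple-of-2049 mystery (an inference, not confirmed above): base64 encodes bytes in groups of 3, so concatenating per-chunk base64 strings only decodes cleanly when every chunk's byte length is a multiple of 3, and 2049 = 3 × 683. Any other chunk size puts '=' padding in the middle of the combined string, which window.atob() rejects:

var bytes = "ABCDEFG"; // 7 bytes of sample data

// Chunk size 3 (a multiple of 3): the concatenation decodes cleanly.
var ok = btoa(bytes.slice(0, 3)) + btoa(bytes.slice(3, 6)) + btoa(bytes.slice(6));
console.log(atob(ok)); // "ABCDEFG"

// Chunk size 2 (not a multiple of 3): each chunk carries its own '=' padding,
// so the combined string looks like "QUI=Q0Q=..." and atob() throws.
var bad = btoa(bytes.slice(0, 2)) + btoa(bytes.slice(2, 4)) + btoa(bytes.slice(4, 6)) + btoa(bytes.slice(6));
try { atob(bad); } catch (e) { console.log("decode fails: " + e.name); }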
Well, that was entirely the wrong way to go about it. In case it helps, here's what I ended up doing.
Dynamically create the href address of an anchor tag to return a file (security token and parameters in query string)
get byte array from database
web api call return response message (see code below)
This is much faster and more reliable, but provides less in the way of progress monitoring.
business rule method uses...
...
file.ByteData = (byte[])ds.Tables[0].Rows[0]["ByteData"];
...
web api controller
public HttpResponseMessage ViewFileAttachment([FromUri] int id, string token) {
    HttpResponseMessage response = new HttpResponseMessage();
    ... security stuff
    fileInfoClass file = ... code to get file info
    response.Content = new ByteArrayContent(file.ByteData);
    response.Content.Headers.ContentDisposition =
        new System.Net.Http.Headers.ContentDispositionHeaderValue("attachment") {
            FileName = file.FileName
        };
    response.Content.Headers.ContentType = new MediaTypeHeaderValue(file.MIMEType);
    return response;
}
This could even be improved with streaming.
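The client side described above ("dynamically create the href address of an anchor tag") could look roughly like this (a sketch; the route name and the webServicesPath variable are assumptions carried over from the earlier snippets):

// Hypothetical client: point a temporary anchor at the endpoint; the
// Content-Disposition header then makes the browser treat it as a download.
function viewFileAttachment(id, token) {
    var a = document.createElement("a");
    a.href = webServicesPath + "api/files/view-file-attachment" +
        "?id=" + encodeURIComponent(id) +
        "&token=" + encodeURIComponent(token);
    document.body.appendChild(a);
    a.click();
    document.body.removeChild(a);
}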

Capture REST calls with Selenium

I run integration tests with Selenium as a test runner, and the webdriver.io JavaScript library for the Selenium API.
My test goes as follows:
I load an HTML page and click on a button. I want to check whether a GET REST call was invoked.
I found a plugin for webdriver.io called webdriverajax that is intended to fit my requirements, but it just doesn't work.
Any ideas how to capture REST calls?
You can achieve this by using a custom HttpClient class outside of the Selenium code. As far as I know, Selenium doesn't support this feature.
Assuming that clicking the button calls a REST service, the URL can be grabbed from the HTML DOM element. Then you can use your custom code to verify whether the URL is accessible, and decide whether your test passes or fails based on the status code or some other mechanism.
FileDownloader.java (sample code snippet)
private String downloader(WebElement element, String attribute) throws IOException, NullPointerException, URISyntaxException {
    String fileToDownloadLocation = element.getAttribute(attribute);
    if (fileToDownloadLocation.trim().equals("")) throw new NullPointerException("The element you have specified does not link to anything!");

    URL fileToDownload = new URL(fileToDownloadLocation);
    File downloadedFile = new File(this.localDownloadPath + fileToDownload.getFile().replaceFirst("/|\\\\", ""));
    if (downloadedFile.canWrite() == false) downloadedFile.setWritable(true);

    HttpClient client = new DefaultHttpClient();
    BasicHttpContext localContext = new BasicHttpContext();

    LOG.info("Mimic WebDriver cookie state: " + this.mimicWebDriverCookieState);
    if (this.mimicWebDriverCookieState) {
        localContext.setAttribute(ClientContext.COOKIE_STORE, mimicCookieState(this.driver.manage().getCookies()));
    }

    HttpGet httpget = new HttpGet(fileToDownload.toURI());
    HttpParams httpRequestParameters = httpget.getParams();
    httpRequestParameters.setParameter(ClientPNames.HANDLE_REDIRECTS, this.followRedirects);
    httpget.setParams(httpRequestParameters);

    LOG.info("Sending GET request for: " + httpget.getURI());
    HttpResponse response = client.execute(httpget, localContext);
    this.httpStatusOfLastDownloadAttempt = response.getStatusLine().getStatusCode();
    LOG.info("HTTP GET request status: " + this.httpStatusOfLastDownloadAttempt);
    LOG.info("Downloading file: " + downloadedFile.getName());
    FileUtils.copyInputStreamToFile(response.getEntity().getContent(), downloadedFile);
    response.getEntity().getContent().close();

    String downloadedFileAbsolutePath = downloadedFile.getAbsolutePath();
    LOG.info("File downloaded to '" + downloadedFileAbsolutePath + "'");
    return downloadedFileAbsolutePath;
}
TestClass.java
@Test
public void downloadAFile() throws Exception {
    FileDownloader downloadTestFile = new FileDownloader(driver);
    driver.get("http://www.localhost.com/downloadTest.html");
    WebElement downloadLink = driver.findElement(By.id("fileToDownload"));
    String downloadedFileAbsoluteLocation = downloadTestFile.downloadFile(downloadLink);
    assertThat(new File(downloadedFileAbsoluteLocation).exists(), is(equalTo(true)));
    assertThat(downloadTestFile.getHTTPStatusOfLastDownloadAttempt(), is(equalTo(200)));
    // you can use the status code to validate the REST URL
}
Here is the reference.
Note: This may not exactly fit your requirement, but you can get some ideas from it and modify it to fit your needs.
Also look at BrowserMob Proxy; using it you can also achieve what you want.
The problem was the webdriver.io version. Apparently, webdriverajax works fine with webdriver.io v3.x but not with v4.x. I use v4.5.2.
I decided not to use a plugin and instead implemented a mock for the window.XMLHttpRequest open and send methods, as follows:
proxyXHR() {
    this.browser.execute(() => {
        const namespace = '__scriptTests';
        window[namespace] = { open: [], send: [] };
        const originalOpen = window.XMLHttpRequest.prototype.open;
        window.XMLHttpRequest.prototype.open = function (...args) {
            window[namespace].open.push({
                method: args[0],
                url: args[1],
                async: args[2],
                user: args[3],
                password: args[4]
            });
            originalOpen.apply(this, [].slice.call(args));
        };
        // note: the original send is not called, so captured requests are not actually sent
        window.XMLHttpRequest.prototype.send = function (...args) {
            // guard: GET requests call send() with no body, which JSON.parse would reject
            if (args.length > 0 && args[0]) {
                window[namespace].send.push(JSON.parse(args[0]));
            }
        };
    });
}

getXHRsInfo() {
    const result = this.browser.execute(() => {
        const namespace = '__scriptTests';
        return window[namespace];
    });
    return result.value;
}
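Usage in a test could then look something like this (a sketch; the helper instance, button selector and assertion style are assumptions):

var assert = require('assert');

it('invokes a GET REST call when the button is clicked', () => {
    helper.proxyXHR();              // install the interceptors before the click
    browser.click('#myButton');     // hypothetical button under test
    const xhrs = helper.getXHRsInfo();
    const gets = xhrs.open.filter(x => x.method.toUpperCase() === 'GET');
    assert(gets.length > 0, 'expected at least one GET XHR');
});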

Is there a way I can automate the creation of .json files used for language translations?

I have files such as this that have translation keys and values:
locale-en.json
{
"CHANGE_PASSWORD": "Change Password",
"CONFIRM_PASSWORD": "Confirm Password",
"NEW_PASSWORD": "New Password"
}
locale-jp.json
{
"CHANGE_PASSWORD": "パスワードを変更します",
"CONFIRM_PASSWORD": "パスワードを認証します",
"NEW_PASSWORD": "新しいパスワード"
}
When I add a new translation key to the JSON file containing the English translations for example, I must remember to add that key and the associated translation to all the other JSON files. All the JSON files are also edited separately. The process is laborious and error prone.
Has anyone found a way to reduce the errors and to automate the process?
Ideally I would like to be able to run a script from Windows PowerShell that would change the files to this if an additional key was added to locale-en.json:
locale-en.json
{
"CHANGE_PASSWORD": "Change Password",
"CONFIRM_PASSWORD": "Confirm Password",
"NEW_PASSWORD": "New Password",
"NEW_KEY": "New Key"
}
locale-jp.json
{
"CHANGE_PASSWORD": "パスワードを変更します",
"CONFIRM_PASSWORD": "パスワードを認証します",
"NEW_PASSWORD": "新しいパスワード",
>>>"NEW_KEY": "New Key"
}
You could write something like this in PowerShell:
$masterFile = "locale-en.json"

function Get-LocaleMap($file){
    $map = @{}
    $localeJson = ConvertFrom-Json (gc $file -Raw)
    $localeJson | gm -MemberType NoteProperty | % {
        $map.Add($_.Name, ($localeJson | select -ExpandProperty $_.Name))
    }
    return $map
}

$masterLocale = Get-LocaleMap $masterFile

ls | ? { $_.Name -like "locale-*.json" -and $_.Name -ne $masterFile } | % {
    $locale = Get-LocaleMap $_.FullName
    $masterLocale.GetEnumerator() | % {
        if(!$locale.ContainsKey($_.Key)){
            $locale.Add($_.Key, $_.Value)
        }
    }
    ConvertTo-Json $locale | Out-File -FilePath $_.FullName -Force -Encoding utf8
}
This creates a dictionary from your English JSON file. Then it looks up all the other locale files and checks them for keys that are present in the English file but missing from them. It adds the missing keys and values, and saves the locale files as UTF-8.
Let me show you how you can do the same with old school Windows Scripting since you seem to prefer JavaScript:
var masterFile = "locale-en.json";
var fso = new ActiveXObject("Scripting.FileSystemObject");
var scriptPath = fso.GetParentFolderName(WScript.ScriptFullName);
var charSet = 'utf-8';
var f = fso.GetFolder(scriptPath);
var fc = new Enumerator(f.files);

function getLocaleMap(fileName){
    var path = scriptPath + '\\' + fileName;
    var stream = new ActiveXObject("ADODB.Stream"); // you cannot use fso for utf-8
    try{
        stream.CharSet = charSet;
        stream.Open();
        stream.LoadFromFile(path);
        var text = stream.ReadText();
        var json = {};
        eval('json = ' + text); // JSON.parse is not available in all versions
        return json;
    }
    finally{
        stream.Close();
    }
}

function saveAsUtf8(fileName, text){
    var path = scriptPath + '\\' + fileName;
    var stream = new ActiveXObject("ADODB.Stream");
    try{
        stream.CharSet = charSet;
        stream.Open();
        stream.Position = 0;
        stream.WriteText(text);
        stream.SaveToFile(path, 2); // overwrite
    }
    finally{
        stream.Close();
    }
}

var locales = [];
var masterMap = getLocaleMap(masterFile);

for (; !fc.atEnd(); fc.moveNext())
{
    var file = fc.item();
    var extension = file.Name.split('.').pop();
    if(extension != "json" || file.Name == masterFile){
        continue;
    }
    var map = getLocaleMap(file.Name);
    var newLocaleText = '{\r\n';
    var i = 0;
    for(var name in masterMap){
        var value = '';
        if(map[name]){
            value = map[name];
        }
        else{
            value = masterMap[name];
        }
        if(i > 0){
            newLocaleText += ",\r\n";
        }
        // use double quotes so the output stays valid JSON
        newLocaleText += "\t\"" + name + "\": \"" + value + "\"";
        i++;
    }
    newLocaleText += '\r\n}';
    saveAsUtf8(file.Name, newLocaleText);
}
You can run the JavaScript from the command line like this:
Cscript.exe "C:\yourscript.js"
I hope it helps.
Is there a way I can automate the creation of .json files used for language translations?
YES, executing automated tasks is exactly what automation tools like Grunt and Gulp were designed to do.
As you said, doing things manually is laborious and error prone, so Grunt/Gulp are the way to go.
With a simple Grunt/Gulp config, all the relevant .json files can be watched simultaneously: any key added to any of them will be instantly detected, and order the execution of the custom script of your choice.
HOW GRUNT/GULP CAN DO IT:
Grunt/Gulp will constantly watch all the relevant JSON files;
When a change is detected in a watched file, a custom script is run;
The custom script will read the changed file and retrieve the new key(s) and value(s);
The custom script will then write to all the other relevant JSON files.
CONFIGURING GRUNT
To detect file changes automatically and execute myCustomScript, just use grunt-contrib-watch like so:
watch: {
    scripts: {
        files: ['**/*.locale.json'],
        tasks: ['myCustomScript'],
    },
}
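For completeness, that target plugs into a Gruntfile roughly like this (a sketch; only the watch wiring is shown, and myCustomScript is registered separately as in the script below):

// Gruntfile.js - minimal wiring for the watch target above.
module.exports = function (grunt) {
    grunt.initConfig({
        watch: {
            scripts: {
                files: ['**/*.locale.json'],
                tasks: ['myCustomScript'],
            },
        },
    });
    grunt.loadNpmTasks('grunt-contrib-watch');
};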
CUSTOM SCRIPT TO ADD THE NEW KEY(S) TO THE RELEVANT .JSON FILES:
grunt.event.on('watch', function(action, filepath) {
    // filepath is the path to the file where the change was detected
    grunt.config.set('filepath', grunt.config.escape(filepath));
});

var myCustomScript = function(changedFile, keyFile){
    var project = grunt.file.readJSON(changedFile);
    // the file where changes were detected, as a json object
    var keys = grunt.file.readJSON(keyFile);
    // keyFile as a json object
    var newKeyArray = [];
    // walk changedFile's keys, and check whether each key is in keyFile
    for (var key in project) {
        if (project.hasOwnProperty(key)) {
            if(!keys.hasOwnProperty(key)){
                // a new key was detected
                newKeyArray.push(key);
            }
        }
    }
    // update all the other relevant JSON files with `grunt.file.write`, adding all the keys in newKeyArray:
    var filesToChangeArray = grunt.file.expand('**/*.locale.json');
    // returns an array that contains all filepaths where change is desired
    filesToChangeArray.forEach(function(path){
        // walk newKeyArray to build the addedContent string
        var addedContent = '';
        newKeyArray.forEach(function(key){
            // write each new key, with a value of "to be set", to the addedContent string
            addedContent += '"' + key + '":"to be set",';
        });
        grunt.file.write(path, addedContent);
    });
};
Ideally I would like to be able to run a script from Windows PowerShell
Even though Grunt/Gulp are often used to execute custom files written in JavaScript/Node.js, they are perfectly able to orchestrate the execution of scripts written in other languages.
To execute a PowerShell script, you could use a Grunt plugin called grunt-shell, like so:
grunt.initConfig({
    shell: {
        ps: {
            options: {
                stdout: true
            },
            command: 'powershell myScript.ps1'
        }
    }
});
as detailed in this SO post.
So if PowerShell is your thing, you could have the best of both worlds:
Easy detection with Grunt/Gulp watch;
PowerShell script execution when change is detected.
However, you might just as easily use Grunt/Gulp on its own for this: as Grunt/Gulp is already taking care of the detection in the background, all you need to do is have it run a custom script that reads your new keys (grunt.file.readJSON) and copies them (grunt.file.write) to the relevant files, as sketched below.
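That Grunt-only variant could be sketched as follows (file names taken from the question; the task name syncLocales is an assumption):

// Minimal sketch: copy any keys missing from each locale file from the master.
grunt.registerTask('syncLocales', function () {
    var masterFile = 'locale-en.json';
    var master = grunt.file.readJSON(masterFile);
    grunt.file.expand('locale-*.json').forEach(function (path) {
        if (path === masterFile) { return; }
        var locale = grunt.file.readJSON(path);
        Object.keys(master).forEach(function (key) {
            if (!locale.hasOwnProperty(key)) {
                locale[key] = master[key]; // fall back to the English value
            }
        });
        grunt.file.write(path, JSON.stringify(locale, null, 2));
    });
});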
I automated the process using a JavaScript solution with Node.js, run from the command line:
$ node localeUpdater.js
This will watch your default locale (locale-en.json) for any revisions and update the whole locale file list as necessary:
create the necessary locale files if not present, then initialize them with the default locale data
add new keys based on the default locale
remove missing keys based on the default locale
localeUpdater.js
var fs = require("fs");

var localeFileDefault = "locale-en.json";
var localeFileList = ["locale-jp.json", "locale-ph.json"];

fs.watchFile(localeFileDefault, function() {
    var localeDefault = readFile(localeFileDefault);
    var localeCurrent = null;
    var fileNameCurrent = null;
    for (var i in localeFileList) {
        fileNameCurrent = localeFileList[i];
        console.log("Adding new keys from default locale to file " + fileNameCurrent);
        localeCurrent = readFile(fileNameCurrent);
        for (var key in localeDefault) {
            if (!localeCurrent[key]) {
                console.log(key + " key added.");
                localeCurrent[key] = localeDefault[key];
            }
        }
        console.log("Removing keys not on default locale to file " + fileNameCurrent);
        for (var key in localeCurrent) {
            if (!localeDefault[key]) {
                console.log(key + " key removed.");
                delete localeCurrent[key];
            }
        }
        writeFile(fileNameCurrent, JSON.stringify(localeCurrent));
        console.log("File " + fileNameCurrent + " updated.");
    }
});

function readFile(fileName) {
    var result = null;
    if (fs.existsSync(fileName)) {
        result = fs.readFileSync(fileName, "utf8");
        result = result ? JSON.parse(result) : {};
    } else {
        writeFile(fileName, "{}");
        result = {};
    }
    return result;
}

function writeFile(fileName, content) {
    fs.writeFileSync(fileName, content, "utf8");
}
There are multiple safeguards you should put in place.
First off, your translation function should have some safeguards. Something like:
function gettext(text) {
    if (manifest[text]) {
        return manifest[text];
    }
    // fall back to the untranslated string
    return text;
}
I'm not sure how you register new strings, but we regex our code base for things like gettext('...') and then compile a list of translations that way. A couple of times a day we push that to a 3rd-party translation company, which notices the new strings. They populate the new entries and we pull the content back. The "pull" involves a compilation to the different language files. The translation file compilation always falls back to English. In other words, we download a file from the 3rd party and do something like:
_.map(strings, function(string) {
    return localeManifest[locale][string] || localeManifest['en_US'][string];
});
This ensures that even if the manifest for the locale doesn't contain the translation yet, we still populate it with the US English version.
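The string-extraction step mentioned above ("we regex our code base for things like gettext('...')") could look roughly like this (a sketch; the file name and the single-quote-only assumption are mine):

// Hypothetical Node script: collect gettext('...') literals from a source file.
var fs = require('fs');

var source = fs.readFileSync('app.js', 'utf8'); // hypothetical source file
var re = /gettext\('([^']*)'\)/g;
var strings = [];
var match;
while ((match = re.exec(source)) !== null) {
    strings.push(match[1]); // the captured string literal
}
console.log(strings); // the list pushed to the translation vendor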

Downloading a file in MVC app using AngularJS and $http.post

Any help is most welcomed and really appreciated.
I have an MVC action which retrieves file content from a web service. This action is invoked from an Angular service (located in services.js) using $http.post(action, model), and the action returns a FileContentResult object, which contains the byte array and the content type.
public ActionResult DownloadResults(DownloadResultsModel downloadResultsModel)
{
downloadResult = ... // Retrieving the file from a web service
Response.ClearHeaders();
Response.AddHeader("Content-Disposition", string.Format("attachment; filename={0}", downloadResult.FileName));
Response.BufferOutput = false;
return new FileContentResult(downloadResult.Contents, downloadResult.ContentType);
}
The issue I'm having is that the browser does not perform its default behavior of handling the file (for example, prompting to open it, save it or cancel). The action completes successfully with the content of the file and the file name (injected into the FileContentResult object), but there is no response from the browser.
When I replace the post with $window.location.href and construct the URI myself, I hit the action, and after it completes the browser handles the file as expected.
Can anyone think of a way to complete the 'post' as expected?
Thanks,
Elad
I am using the code below to download the file, given that the file exists on the server and the client sends the server the full path of the file...
As per your requirement, change the code to specify the path on the server itself.
[HttpGet]
public HttpResponseMessage DownloadFile(string filename)
{
    filename = filename.Replace("\\\\", "\\").Replace("'", "").Replace("\"", "");
    if (!char.IsLetter(filename[0]))
    {
        filename = filename.Substring(2);
    }
    var fileinfo = new FileInfo(filename);
    if (!fileinfo.Exists)
    {
        throw new FileNotFoundException(fileinfo.Name);
    }
    try
    {
        var excelData = File.ReadAllBytes(filename);
        var result = new HttpResponseMessage(HttpStatusCode.OK);
        var stream = new MemoryStream(excelData);
        result.Content = new StreamContent(stream);
        result.Content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
        result.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment")
        {
            FileName = fileinfo.Name
        };
        return result;
    }
    catch (Exception ex)
    {
        return Request.CreateResponse(HttpStatusCode.ExpectationFailed, ex);
    }
}
and then on the client side in Angular:
var downloadFile = function (filename) {
    var ifr = document.createElement('iframe');
    ifr.style.display = 'none';
    document.body.appendChild(ifr);
    ifr.src = document.location.pathname + "api/GridApi/DownloadFile?filename='" + escape(filename) + "'";
    ifr.onload = function () {
        document.body.removeChild(ifr);
        ifr = null;
    };
};
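If you want to keep the original $http.post call instead of navigating to a URL, a Blob-based variant is also possible (a sketch under assumptions: action and model are the values from the question, the browser supports createObjectURL, and the file name is hard-coded because parsing Content-Disposition is omitted):

// Ask AngularJS for raw bytes, then reuse the usual Blob/anchor download trick.
$http.post(action, model, { responseType: 'arraybuffer' }).then(function (response) {
    var blob = new Blob([response.data], { type: response.headers('Content-Type') });
    var url = URL.createObjectURL(blob);
    var a = document.createElement('a');
    a.href = url;
    a.download = 'download.bin'; // hypothetical; the real name is in Content-Disposition
    document.body.appendChild(a);
    a.click();
    document.body.removeChild(a);
    URL.revokeObjectURL(url);
});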

How to properly create a new Producer?

I'm using the model-driven tool CodeFluent Entities in order to deploy a model to a database engine.
I'm thinking about using local-storage database engines (like IndexedDB or Web SQL) in order to store my data for a web application without a server.
I looked into the documentation, but it seems a little sparse to me... I think I understood the basic principles, like the injection points Produce() and Terminate(), but what about the target directory of the actual production?
In my case, which is JavaScript source code files, how can I specify correctly (in a referenced manner) where to generate them? And does it have to be in an external project, or could I just fill a directory in another project (the .vsproj of my web app, for example)?
Could the documentation include a code sample regarding these aspects, or can someone point me to an article fitting my needs?
The Template approach
According to your needs, I suggest you use a template instead of developing a custom producer, for deployment reasons among others. Using the template producer (shipped with CodeFluent Entities) you can quickly and easily create complex scripts by taking advantage of the CodeFluent Entities meta-model.
This producer is based on CodeFluent Entities' template engine and allows you to generate text files (JavaScript in your case) at production time.
As a reminder, a template is simply a mixture of text blocks and control logic that can generate an output file.
This producer takes care of all common operations: updating the project (.XXproj) to add your generated files, adding missing references, etc.
You can find below an example that generates an IndexedDB script file from a CodeFluent Entities model (for demonstration purposes only). Here's the template source file:
[%# reference name="C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.5\System.Core.dll" %]
[%# namespace name="System" %]
[%# namespace name="System.Linq" %]
[%# namespace name="CodeFluent.Model" %]
var context = {};
context.indexedDB = {};
context.indexedDB.db = null;
context.indexedDB.open = function () {
var version = 11;
var request = indexedDB.open([%=Producer.Project.DefaultNamespace%], version);
request.onupgradeneeded = function (e) {
var db = e.target.result;
e.target.transaction.onerror = context.indexedDB.onerror;
[%foreach(Entity entity in Producer.Project.Entities){
string properties = String.Join(", ", entity.Properties.Where(p => !p.IsPersistenceIdentity).Select(p => "\"" + p.Name + "\""));
%]
if (db.objectStoreNames.contains("[%=entity.Name%]")) {
db.deleteObjectStore("[%=entity.Name%]");
}
var store = db.createObjectStore("[%=entity.Name%]",
{ keyPath: "id", autoIncrement: true });
store.createIndex([%=properties %], { unique: false });[%}%]
};
request.onsuccess = function (e) {
context.indexedDB.db = e.target.result;
};
request.onerror = context.indexedDB.onerror;
};
[%foreach(Entity entity in Producer.Project.Entities){
string parameters = String.Join(", ", entity.Properties.Where(p => !p.IsPersistenceIdentity).Select(p => p.Name));%]
context.indexedDB.[%=entity.Name%] = {}
context.indexedDB.[%=entity.Name%].add = function ([%= parameters %]) {
var db = context.indexedDB.db;
var trans = db.transaction(["[%=entity.Name%]"], "readwrite");
var store = trans.objectStore("[%=entity.Name%]");
var request = store.put({
[%
foreach (Property property in entity.Properties.Where(p => !p.IsPersistenceIdentity)) {%]
"[%=property.Name%]": [%=property.Name%], [%}%]
"timeStamp": new Date().getTime()
});
request.onsuccess = function (e) {
console.log(e.value);
};
request.onerror = function (e) {
console.log(e.value);
};
};
context.indexedDB.[%=entity.Name%].delete = function (id) {
var db = context.indexedDB.db;
var trans = db.transaction(["[%=entity.Name%]"], "readwrite");
var store = trans.objectStore("[%=entity.Name%]");
var request = store.delete(id);
request.onsuccess = function (e) {
console.log(e);
};
request.onerror = function (e) {
console.log(e);
};
};
context.indexedDB.[%=entity.Name%].loadAll = function () {
var db = context.indexedDB.db;
var trans = db.transaction(["[%=entity.Name%]"], "readwrite");
var store = trans.objectStore("[%=entity.Name%]");
var keyRange = IDBKeyRange.lowerBound(0);
var cursorRequest = store.openCursor(keyRange);
cursorRequest.onsuccess = function (e) {
// not implemented
};
cursorRequest.onerror = function (e) {
console.log(e);
};
};
[%}%]
function init() {
context.indexedDB.open(); // initialize the IndexedDB context.
}
window.addEventListener("DOMContentLoaded", init, false);
Then you need to configure your CodeFluent Entities project by adding the Template Producer and defining the template above as the source file.
If you consider the following model:
Just build it to generate the IndexedDB script file in the target project (a web application, for example) and you'll be able to manipulate the generated API like this:
context.indexedDB.Contact.add("Peter", "Boby")
context.indexedDB.Product.add("Tablet")
context.indexedDB.Product.add("Computer")
context.indexedDB.Contact.delete(1)
context.indexedDB.Product.loadAll()
The custom Producer approach
Nevertheless, if you ever need to target a technology or platform that isn't supported natively by CodeFluent Entities, you may create your own custom producer by implementing the IProducer interface:
public interface IProducer
{
    event Producer.OnProductionEventHandler Production;

    void Initialize(Project project, Producer producer);
    void Produce();
    void Terminate();
}
First of all, you need to understand that the CodeFluent Entities build engine calls each of your configured producers one by one to generate your code.
Firstly, CodeFluent Entities calls the Initialize method of each producer. It takes as parameters an instance of the CodeFluent Entities project and the current producer.
Then it calls the Produce method, following the same process. This is the right place to implement your generation logic.
Finally, you can implement finalization logic in the Terminate method.
CodeFluent provides base classes that implement the IProducer interface, such as BaseProducer (located in the CodeFluent.Producers.CodeDom assembly), which provides behaviors like "add missing references" or "update the Visual Studio project (.XXproj)".
In addition, here's a blog post that can help you integrate a custom producer into the modeler.
The Sub-Producer approach
Another approach might be to develop a custom Sub-Producer, but in my opinion it is not suitable for your needs.
