I have a stand-alone web resource linked from the site menu in a model-driven app. It makes multiple API calls using parent.Xrm.WebApi and currently works fine.
According to the Microsoft deprecation notice, https://learn.microsoft.com/en-us/power-platform/important-changes-coming, parent.Xrm is going away and will no longer work for stand-alone web resources. Calling Xrm.WebApi without referencing the parent throws the error "Xrm is not defined".
Can anyone help with a supported way to access Xrm.Utility and Xrm.WebApi from a standalone web resource?
Latest documentation update:
The ClientGlobalContext.js.aspx page is deprecated and scheduled to be unavailable after April 1, 2022. Alternative methods to access global context information will be available by December, 2021.
On this particular point, Microsoft has so far documented no workaround or alternative solution. I created a GitHub issue asking for assistance, and it is still open.
In the meantime, I came across an unofficial side project by a Microsoft employee, along with a code sample, from this blog. Whether it will be officially supported is still to be confirmed.
A lot of organizations use ClientGlobalContext.js.aspx in HTML web resources, and this means you will want to upgrade those HTML web resources that utilize the ClientGlobalContext.js.aspx library as soon as possible.
If you are embedding HTML web resources in Dynamics 365 / Power Apps forms, you may want to look at using getContentWindow.
If you have stand-alone HTML web resources, or if you are embedding in forms and would like a neat way to swap out ClientGlobalContext.js.aspx with a new library, Microsoft employee Christopher Nichols has built a solution that may be of interest to you.
Christopher has a nice unofficial side project going on called mock-xrm that, as you can imagine, mocks Xrm. This means we can call our Xrm functionality from within web resources instead of relying on the legacy aspx page.
All we have to do is download the Xrm.min.js file, upload it as a web resource, and reference it from our HTML web resource to consume Xrm the same way as before.
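For illustration, a minimal sketch of such an HTML web resource might look like the following (the script path is a placeholder for wherever you upload Xrm.min.js as a web resource):

    <!-- Sketch of an HTML web resource using the mock-xrm library instead of parent.Xrm. -->
    <script src="new_/scripts/Xrm.min.js"></script>
    <script>
      // Same Client API shape as before, without reaching up to the parent window.
      Xrm.WebApi.retrieveMultipleRecords("account", "?$select=name&$top=5").then(
        function (result) {
          result.entities.forEach(function (account) {
            console.log(account.name);
          });
        },
        function (error) {
          console.log(error.message);
        }
      );
    </script>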
We can keep a close eye on this library and the GitHub issue for further updates and concrete communication.
I'm not an experienced web developer, so I'm having some issues connecting the dots with regard to the ClickUp API (in my case), but the question can probably be generalized as well.
I've been doing a bit of research, but can't seem to find exactly what I'm looking for, which makes me think I'm missing something.
In my case, the goal is to make certain information on ClickUp publicly VIEWABLE (not writable) and format the HTML/CSS ourselves. This would be viewable to strangers without requiring authentication with ClickUp in any way.
ClickUp provides public links and embed options:
My attempt (JS): postMessage(content, url) and document.getElementById(frameID)
Issue: These fail in browsers that enforce cross-origin protections (the same-origin policy). Forcing strangers to disable this security is not viable for us. I've also tried alloworigin, anyorigin, etc., with no luck.
ClickUp API 2.0 for POST, GET, etc
Attempt: I wanted to use this unofficial JS wrapper, https://www.npmjs.com/package/clickup.js, but haven't written anything out yet.
Issues: From my understanding, this requires the stranger to authenticate with ClickUp before being able to read information. This would require server-side code.
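For concreteness, the kind of server-side piece that seems to be needed might look roughly like the sketch below (assuming Node.js 18+ with Express and the ClickUp v2 list-tasks endpoint; the list ID and token are placeholders that stay on the server, and this is an illustration rather than production code):

    // read-only-proxy.js -- a sketch, not production code.
    // Assumes Node.js 18+ (global fetch) and Express; LIST_ID and CLICKUP_TOKEN
    // are placeholders supplied via environment variables and never sent to the browser.
    const express = require('express');
    const app = express();

    app.get('/tasks', async (req, res) => {
      try {
        const response = await fetch(
          `https://api.clickup.com/api/v2/list/${process.env.LIST_ID}/task`,
          { headers: { Authorization: process.env.CLICKUP_TOKEN } }
        );
        const data = await response.json();
        // Expose only the fields the public page needs; the token never leaves the server.
        res.json(data.tasks.map(t => ({ name: t.name, status: t.status && t.status.status })));
      } catch (err) {
        res.status(502).json({ error: 'Upstream request failed' });
      }
    });

    app.listen(3000);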
Other issues: This read-only page is intended to be hosted on GitHub, which doesn't support server-side code (from my understanding), so the solution I'm looking for may not exist.
The most straightforward way I can think of to solve this is parsing the embedded iframe, but the cross-origin problem kills it dead in the water. At this point, I'm stuck and would appreciate any additional feedback.
To formalize my question:
What architecture/pieces would I need to connect a front-end web page with private ClickUp information in a read-only fashion that allows front-end HTML/CSS modifications without user authentication?
My web development knowledge doesn't really go beyond JavaScript and maybe a tiny bit of PHP, so if there are other tools available for this, please let me know and I'll look into them.
Thanks for the time
I have been searching for an answer to this problem now for several weeks. I also previously tried to research this a few years ago to no avail.
Problem Summary:
My company has developed a web-based data analytics suite for a major beverage distributor. They have recently asked for a feature that allows the user to print or download a visually pleasing version of the rendered app as a PDF. I have had no luck in finding a solid, controllable, or reliable method to do this. I was hoping the stack community might be able to point me in the right direction.
Current Tech Stack:
Plack servers
Perl, based on the Dancer framework
Standard web dev front end: HTML5, CSS3, JavaScript, jQuery/jQuery UI
The client is using IE9/10 and Chrome.
Attempted Solutions Summary:
Obviously I started with window.print() and tried to control what was printed using classes and a specialized print.css, but the output was still awful.
I looked into pdfmachine and pdfbox, and even contacted Adobe's Acrobat development team directly to see if they had an out-of-the-box solution our company could purchase. I was informed that such a product would run counter to their desired business model of putting an Acrobat subscription on each client computer rather than a single server-side application.
I have searched Stack extensively but did not feel that the articles I found covered what I was looking for.
At present, I am all out of ideas and am hoping somebody out there has had better luck at this than I have.
tl;dr = I need a pdf version of the rendered output of a complex reporting app.
Thanks for your time stack, I appreciate it.
A solution I have used in the past is PhantomJS running on a server to generate the PDF for download/email. Usually, if the content is sensitive, the server (which handles authentication) provides a single-use viewing token that is then passed to a PhantomJS process, which loads the URL with the viewing token and then saves it as a PDF.
Further info on PhantomJS's screen capture API can be found on GitHub: https://github.com/ariya/phantomjs/wiki/Screen-Capture
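For illustration, a minimal PhantomJS capture script along those lines might look like this (a sketch; the report URL, token, and timeout are placeholders):

    // save-pdf.js -- run with: phantomjs save-pdf.js
    // A minimal sketch; the URL and single-use viewing token are placeholders.
    var page = require('webpage').create();
    var url = 'https://reports.example.com/view?token=SINGLE_USE_TOKEN';

    page.paperSize = { format: 'A4', orientation: 'landscape', margin: '1cm' };

    page.open(url, function (status) {
      if (status !== 'success') {
        console.log('Failed to load ' + url);
        phantom.exit(1);
      } else {
        // Give client-side charts a moment to finish rendering before capture.
        window.setTimeout(function () {
          page.render('report.pdf');
          phantom.exit();
        }, 2000);
      }
    });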
Is it something you can create in Perl using PDF::API2 or PDF::Create? You can load and modify an existing PDF (handy if you want standard headers and footers), and then insert the relevant content. The learning curve can be a bit steep, but simple reports should be easy enough.
See PDF::TextBlock and PDF::Table too - they are great little helpers.
Consider this service: http://pdfmyurl.com/. I tried many Perl modules, but they didn't solve my problems.
I am trying to do this tutorial, http://developer.yahoo.com/yql/guide/yql-code-examples.html#yql_javascript
and it says to use gadgets.io, but does not say anything about installing it. Where do I get it from?
I should also say that I am trying to perform the query from a socket.io server.
gadgets.io is a JavaScript service exposed by the OpenSocial API, which is available through OpenSocial containers. OpenSocial containers are used to host third-party web components known as gadgets and were originally contributed by Google.
There is no such thing as 'getting' the gadgets.io library, as it is exposed to gadgets running on a container. I'm not familiar with YQL, but I had a look at the link you provided, and the code example they demonstrate is meant to be executed within some type of playground environment at Yahoo, I guess; therefore their environment should support gadgets, i.e. an OpenSocial container.
Have a look at the original documentation of gadgets.io.makeRequest to see similar examples. Hope it helps :)
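For example, a typical gadgets.io.makeRequest call inside a gadget hosted by an OpenSocial container looks roughly like this (a sketch; the YQL table and query are illustrative placeholders):

    // Only works inside a gadget running in an OpenSocial container,
    // not in a plain Node.js / socket.io process.
    function fetchYql() {
      var params = {};
      params[gadgets.io.RequestParameters.CONTENT_TYPE] = gadgets.io.ContentType.JSON;

      var query = 'select * from upcoming.events where location="San Francisco"';
      var url = 'https://query.yahooapis.com/v1/public/yql?format=json&q=' +
                encodeURIComponent(query);

      gadgets.io.makeRequest(url, function (response) {
        if (response.data) {
          // response.data holds the parsed JSON result.
          console.log(response.data);
        }
      }, params);
    }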
Besides Google Libraries API what other services are there for hosted javascript libraries?
Please only list trusted sources, not some unknown third party.
Microsoft's CDN
http://www.asp.net/ajaxlibrary/cdn.ashx
Before you go in search of hosted JavaScript libraries, you should consider the fact that any JavaScript you include in your web page runs within the context of your domain and can access any data rendered on the page, or anything the user can normally access on your domain. Using Google's hosted JavaScript is fine, but if it's some third party you have never heard of, you might want to think twice.
Perhaps it would be better to search for high-quality JavaScript libraries and download your own copy that you maintain within your domain on your own servers (and can audit for security purposes)?
Out of curiosity... what specific functionality are you looking for?
There's also Yahoo YUI (http://developer.yahoo.com/yui/), though I believe they only host YUI itself. Make sure you pay attention to Michael Safyan's answer, too; which third parties you're willing to trust to serve code to your users should be a carefully made decision. Beyond that, if you're looking for generic JS hosting, you should make sure you really need it: a minified version of jQuery or MooTools is incredibly tiny and shouldn't make any real difference either to your server's CPU usage or bandwidth expenditure.
It also doesn't meaningfully affect the maintainability of your HTML or JS, and it introduces another point of failure in your implementation.
Does anyone know how disqus works?
It manages comments on a blog, but the comments are all held on third-party site. Seems like a neat use of cross-site communication.
The general pattern used is JSONP.
It's actually implemented in a fairly sophisticated way (at least on the jQuery site): they defer the loading of the disqus.js and thread.js files until the user scrolls to the comment section.
The thread.js file contains JSON content for the comments, which is rendered into the page after it's loaded.
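As an illustration of that deferred-loading idea (not Disqus's actual code, which predates IntersectionObserver and likely used scroll handlers), a sketch might be:

    // Load a comments script only when the #comments element scrolls into view.
    var commentsLoaded = false;

    function loadCommentsScript() {
      if (commentsLoaded) return;
      commentsLoaded = true;
      var s = document.createElement('script');
      s.src = 'https://example.disqus.com/embed.js'; // placeholder URL
      s.async = true;
      document.body.appendChild(s);
    }

    new IntersectionObserver(function (entries, observer) {
      if (entries[0].isIntersecting) {
        loadCommentsScript();
        observer.disconnect();
      }
    }).observe(document.getElementById('comments'));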
You have three options when adding Disqus commenting to a site:
Use one of the many integrated solutions (WordPress, Blogger, Tumblr, etc. are supported)
Use the universal JavaScript code
Write your own code to communicate with the Disqus API
The main advantage of the integrated solutions is that they're easy to set up. In the case of WordPress, for example, it's as easy as activating a plug-in.
Having the ability to communicate with the API directly is very useful, and it offers two advantages over the other options. First, it gives you, as the developer, complete control over the markup. Second, you're able to process comments server-side, which may be preferable.
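For reference, the universal JavaScript option boils down to a small snippet along these lines (a sketch modeled on Disqus's install instructions; EXAMPLE stands in for your shortname, and the page URL/identifier values are placeholders):

    <div id="disqus_thread"></div>
    <script>
      // Placeholders: EXAMPLE is your Disqus shortname; set your own page URL/identifier.
      var disqus_config = function () {
        this.page.url = 'https://example.com/my-post/';
        this.page.identifier = 'my-post';
      };
      (function () {
        var d = document, s = d.createElement('script');
        s.src = 'https://EXAMPLE.disqus.com/embed.js';
        s.setAttribute('data-timestamp', +new Date());
        (d.head || d.body).appendChild(s);
      })();
    </script>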
It looks like they use the easyXDM library, which picks the best available way for the current browser to communicate with another site.
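For illustration, the basic easyXDM pattern looks roughly like this (a hedged sketch, not Disqus's actual code; the remote channel URL is a placeholder):

    // Hypothetical consumer-page side of an easyXDM channel.
    var socket = new easyXDM.Socket({
      remote: 'https://comments.example.com/channel.html', // placeholder provider page
      onMessage: function (message, origin) {
        // Handle data sent back from the provider's iframe.
        console.log('Received from ' + origin + ': ' + message);
      }
    });

    // Send a message across domains to the provider.
    socket.postMessage(JSON.stringify({ thread: 'my-post' }));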
Quoting Anton Kovalyov (a former engineer at Disqus), whose answer to the same question on a different site was really helpful to me:
Disqus is a third-party JavaScript application that runs in your browser and injects itself on publishers' websites. These publishers need to install a small snippet of JavaScript code that makes the first request to our servers and loads initial JavaScript loader. This loader then creates all necessary iframe elements, gets the data from our servers, renders templates and injects the result into some element on the page.
As you can probably guess there are quite a few different technologies supporting what seems like a simple operation. On the back-end you have to run and scale a gigantic web application that serves millions of requests (mostly read). We use Python, Django, PostgreSQL and Redis (for our realtime service).
On the front-end you have to minimize your payload, make sure your app is super fast and that it doesn't break in extremely hostile environments (you will be surprised how screwed up publisher websites can be). Cross-domain communication—ability to send messages from hosting website to your servers—can be tricky as well.
Unfortunately, it is impossible to explain how everything works in a comment on Quora, or even in an article. So if you're interested in the back-end side of Disqus just learn how to write, run and operate highly-scalable websites and you'll be golden. And if you're interested in the front-end side, Ben Vinegar and myself (both front-end engineers at Disqus) wrote a book on the topic called Third-party JavaScript (http://thirdpartyjs.com/).
I'm planning to read the book he mentioned, I guess it will be quite helpful.
Here's also a link to the official answer to this question on the Disqus site.
Short answer? AJAX. You get your own URL, e.g. "site.com/?comments=ID", included via JavaScript... but with real-time updates like that you would need a polling server.
I think they keep the content on their site, and your site only sends and receives the data to/from Disqus. Now I wonder what happens if you decide you want to bring your commenting in-house without losing all existing comments! How easily could you get to your data, I wonder? They claim that the data belongs to you, but they have control over it, and there is not much explanation on their site about this.
I often leave comments on the Disqus platform. Sometimes a comment seems to be removed once you refresh, and sometimes it isn't. I think the ones that were removed are held for moderation without it being stated.