Below is the code I am using to validate that a download made it to the downloads folder.
The file is in CSV format; filepath is declared at the top.
The test passes, but the command log prints the entire content of the file for both assertions (should and contains). How do I stop the command log from printing the whole file body? I already tried passing the false flag and it still outputs everything.
Using Cypress version 9.2.
it('verify download', () => {
  // some code here to fill in the form data and initiate the download
  cy.readFile(filepath, { log: false })
    .should('exist')
    .and('contains', 'some text to assert')
})
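A likely explanation is that { log: false } only silences the cy.readFile() command entry itself, while the chained .should()/.and() assertions still log their subject, which is the whole file. A minimal sketch of one possible workaround (not from the original post): assert on a boolean derived from the content, so the file body is never the subject of an assertion.
it('verify download', () => {
  // some code here to fill in the form data and initiate the download

  // cy.readFile() already fails the test (after retrying) if the file is missing,
  // so only the content check needs an explicit assertion here.
  cy.readFile(filepath, { log: false }).then((content) => {
    // the assertion subject is a boolean, so the command log never shows the file body
    expect(content.includes('some text to assert'), 'csv contains expected text').to.be.true
  })
})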
I wanted to test whether the content of the uploaded file and the downloaded file are the same. The following is what I tried with Cypress:
it.only(cfg.testname, () => {
// 1. Login to website, navigate to desired webapge
// 2. Upload the file
// 3. Download the file
// cy.wait(5000)
// 4. Read the uploaded file:
const fileContent = cy.fixture(_filePath)
console.log('fixture file path', '======>', _filePath)
console.log('fixture file content', '=====>', fileContent)
// 5. Read the downloaded file:
const downloadsFolder = Cypress.config("downloadsFolder")
const downloadedFileContent = cy.readFile(path.join(downloadsFolder, _fileName))
console.log('downloaded file path', '======>', path.join(downloadsFolder, fileName))
console.log('downloaded file content','====>', downloadedFileContent)
// 6. Check if they are equal:
expect(downloadedFileContent).equals(fileContent)
})
However, when I run this test, it does not even complete the login step and immediately gives an assertion error on step 6, that is, on expect()...:
AssertionError: expected { Object (userInvocationStack, specWindow, ...) } to equal {
Object (userInvocationStack, specWindow, ...) }
at Context.eval (VM753 tests:224)
When I comment out the step 6 expect()..., it correctly logs in, uploads the file and downloads the file. So I felt I should somehow make the process wait until the download is complete before expect().... I tried uncommenting cy.wait(5000), but no help. It still gives me the above error (with, of course, expect()... uncommented).
Q1. Why this behavior?
Q2. How should I fix this?
PS: I am getting a bunch of errors in the console which I am unable to understand.
The fixture read is async, so you need to use .then(); the same goes for cy.readFile().
The use of path.join(downloadsFolder, _fileName) probably will not work, as it's a Node command; substitute a template string instead.
If you have a complicated file in JSON format, also try .to.deep.eq.
cy.fixture(_filePath).then(fileContent => {
  const downloadsFolder = Cypress.config("downloadsFolder")
  const downloadPath = `${downloadsFolder}/${_fileName}`
  cy.readFile(downloadPath).then(downloadedFileContent => {
    expect(downloadedFileContent).equals(fileContent)
    // or may need deep
    // expect(downloadedFileContent).to.deep.eq(fileContent)
  })
})
I'm a new developer and this is my first Stack Overflow post. I've tried to stick to the format as closely as possible. It's a difficult issue for me to explain, so please let me know if there are any problems with this post!
Problem
I'm working on a VS Code extension built specifically for Next.js applications and running into issues with an event listener for the onDidChangeTextDocument() method. I'm looking to capture data from a JSON file that will always be located in the root of the project (it is automatically generated/updated on each refresh of the test node server for the Next.js app).
Expected Results
The extension is able to look for updates to the file using onDidChangeTextDocument(). However, the issue I'm facing is on the initial run of the application. For the extension to start listening for changes to the JSON file, the user has to open the JSON file. It's supposed to work no matter what file the user has open in VS Code. After the user visits the JSON file while the extension is on, it begins to work from every file in the Next.js project folder.
Reproducing this issue is difficult because it requires an extension, an npm package, and a Next.js demo app, but the general steps are below. If needed, I can provide code for the rest.
1. Start debug session
2. Open Next.js application
3. Run application in node dev
4. Do not open the root JSON file
What I've Tried
Console logs show we are not entering the onDidChangeTextDocument() block until the user opens the root JSON file.
The file path to the root folder is generated correctly at all times, before the promise is reached.
Is this potentially an async issue? Or does the method somehow depend on the user's active editor window to start watching that document?
Since the file is both created and updated automatically, we've tested for both, and neither works until the user opens the root JSON file in their VS Code.
Relevant code snippet (this will not work alone, but I can provide the rest of the code if necessary):
export async function activate(context: vscode.ExtensionContext) {
  console.log('Congratulations, your extension "Next Step" is now active!');
  setupExtension();
  const output = vscode.window.createOutputChannel('METRICS');

  // get the application's root folder filepath string from its uri
  if (!vscode.workspace.workspaceFolders) {
    return;
  }
  const rootFolderPath = vscode.workspace.workspaceFolders[0].uri.path;

  // this gives us the fileName - we join the root folder path with the file we are looking for, which is metrics.json
  const fileName = path.join(rootFolderPath, '/metrics.json');

  const generateMetrics = vscode.commands.registerCommand(
    'extension.generateMetrics',
    async () => {
      console.log('Successfully entered registerCommand');
      toggle = true;
      vscode.workspace.onDidChangeTextDocument(async (e) => {
        if (toggle) {
          console.log('Successfully entered onDidChangeTextDocument');
          if (e.document.uri.path === fileName) {
            // parse our fileName into a URI - we need this for openTextDocument below
            const fileUri = vscode.Uri.parse(fileName);
            // open the file at the Uri path and get its text
            const metricData = await vscode.workspace
              .openTextDocument(fileUri)
              .then((document) => document.getText());
          }
        }
      });
    }
  );
}
Solved this by adding an openTextDocument() call inside the registerCommand block, outside of the onDidChangeTextDocument listener. This made the extension aware of the metrics.json file without it being open in the user's IDE.
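A minimal sketch of what that change might look like, based on the activate() snippet above (the exact spot inside the command callback is an assumption):
const generateMetrics = vscode.commands.registerCommand(
  'extension.generateMetrics',
  async () => {
    toggle = true;
    // Opening the document here (before the listener is registered) makes VS Code
    // load and track metrics.json, so onDidChangeTextDocument fires for it even if
    // the user never opens the file in the editor.
    await vscode.workspace.openTextDocument(fileName);
    vscode.workspace.onDidChangeTextDocument(async (e) => {
      // ...same listener body as in the snippet above
    });
  }
);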
I'm building a script that reads log files, handles what needs to be handled, and then writes the results to a database.
Some caveats:
Some log files have a lot of input, multiple times a second
Some log files have little to no input at all
What I'm trying to do, in simple words:
Read the first line of a file, then delete that line to move on to the next one. While I handle the first line, other lines could be added.
Issues I'm facing
When I try reading a file, then processing it, then deleting the file, some lines have been added in the meantime.
When the app crashes while handling multiple lines at once, for any reason, I can't know which lines have been processed.
Tried so far
fs.readdir('logs/', (err, filenames) => {
  filenames.forEach((filename) => {
    fs.readFile('logs/' + filename, 'utf-8', (err, content) => {
      // processing all new lines (can take multiple ms)
      // deleting the file
      fs.unlink('logs/' + filename, (err) => {});
    });
  });
});
Is there a method (native or not) to 'take' the first line(s), or all lines, from a file at once?
Something similar to what the Array.shift() method does to arrays.
Why are you reading the whole file at once? Instead, you can use Node.js streams:
https://nodejs.org/api/fs.html#fs_class_fs_readstream
This will read the file and output it to the console:
var fs = require('fs');
var readStream = fs.createReadStream('myfile.txt');
readStream.pipe(process.stdout);
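If you want something closer to the line-by-line, Array.shift()-style handling the question asks for, the read stream can be wrapped with Node's readline module. A minimal sketch (the file name is just an example):
const fs = require('fs');
const readline = require('readline');

const rl = readline.createInterface({
  input: fs.createReadStream('logs/myfile.txt'),
  crlfDelay: Infinity, // treat \r\n as a single line break
});

rl.on('line', (line) => {
  // handle one line at a time as it is read from the stream
  console.log(`line: ${line}`);
});

rl.on('close', () => {
  console.log('finished reading the file');
});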
You can also use the npm package node-tail to read the content of a file while new content is being written to it.
https://github.com/lucagrulla/node-tail
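A minimal usage sketch, based on the package's README (the file name is just an example):
const { Tail } = require('tail');

const tail = new Tail('logs/myfile.log');

tail.on('line', (data) => {
  // fired for every new line appended to the file
  console.log(data);
});

tail.on('error', (error) => {
  console.log('ERROR:', error);
});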
If your log files are written as rotated logs (for example, one file per hour: 9AM.log, 10AM.log, ...), you can skip the file that is currently being written to and process the other files. For example, if it is now 10:30 AM, skip 10AM.log and handle the rest.
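A rough sketch of that idea, assuming the hourly naming scheme from the example above (the helper and folder name are hypothetical):
const fs = require('fs');

// hypothetical helper matching the "9AM.log" / "10AM.log" naming from the example
function currentHourFile() {
  const hours = new Date().getHours();
  const hour12 = hours % 12 === 0 ? 12 : hours % 12;
  const suffix = hours < 12 ? 'AM' : 'PM';
  return `${hour12}${suffix}.log`;
}

fs.readdir('logs/', (err, filenames) => {
  if (err) throw err;
  filenames
    .filter((filename) => filename !== currentHourFile()) // skip the file still being written to
    .forEach((filename) => {
      // safe to process and delete this file: no new lines will be appended to it
    });
});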
I am running unit tests using QUnit and trying to integrate QUnit into our build automation and Continuous Integration process. For Atlassian Bamboo to parse the test output, the output must be in an XML file. I can generate a console log in the required XML format by using the qunit-reporter-junit plugin. When Gulp-QUnit runs our test-runner.html file, it prints the console.log output to the screen. My problem is that I cannot find a way to pipe this console.log output into a file.
I have tried the following approaches:
Using gulp-log-capture plugin (does nothing):
gulp.task('qunit', function() {
return gulp.src('./qunit/test-runner.html')
.pipe(logCapture.start(console,'log'))
.pipe(qunit())
.pipe(logCapture.stop('build.xml'));
});
Piping the output into a write stream (which throws an error):
gulp.task('qunit', function() {
return gulp.src('./qunit/test-runner.html')
.pipe(qunit())
.pipe(fs.createWriteStream('build.xml'));
});
Using gulp-out plugin (which simply pipes the input HTML into the new file):
gulp.task('qunit', function() {
return gulp.src('./qunit/test-runner.html')
.pipe(qunit())
.pipe(out('build.xml'));
});
The XML is right there on the screen; I just need to get it into a file somehow.
It turns out that PhantomJS takes Node-like scripts that run on execution. I basically took the run-qunit script from the examples directory of PhantomJS and adjusted it to pipe console output into a build.xml file. The example script can be found here: https://github.com/ariya/phantomjs/blob/master/examples/run-qunit.js
I simply adjusted the onConsoleMessage listener (line 48) like so:
page.onConsoleMessage = function(msg) {
if( msg.search(/xml/) > -1 ) {
fs.write('build.xml',msg);
}
console.log(msg);
};
To make this run as part of an automated build process, I run the following Gulp task, which calls PhantomJS through Node's child_process exec:
exec = require('child_process').exec;
.
.
.
gulp.task('phantom', function(cb) {
  // pass the callback to exec so Gulp knows when PhantomJS has finished
  exec('phantomjs ./qunit/run-qunit.js ./qunit/test-runner.html', cb);
});
Both of these adjustments successfully create a build.xml file that Atlassian Bamboo can read as part of its process.
My requirement is to check whether the Chrome browser is installed on the client machine using JavaScript. I have searched the net but was not able to find a way to do it.
Please help me get this done.
You can't do that with JavaScript, and even if you could, you shouldn't.
JavaScript on the client doesn't have access to the user's system, for very good reasons. (Think: servers with bad intentions.)
You can check whether the current browser is Chrome with the following code:
if (window.chrome) {
  // the browser is Chrome
} else {
  // the browser is not Chrome
}
You can't. Not with JavaScript. However, you can check whether the browser that is currently being used to view your webpage is Google Chrome or not.
<script type="text/javascript">
  if (window.chrome) {
    document.write("Browser is Chrome");
  } else {
    document.write("Please download Chrome");
  }
</script>
You can't get that kind of information directly from JavaScript.
What you can do is use the PowerShell command below in a script and save the result to a file that you'll read later using JavaScript.
Get-ItemProperty HKLM:\Software\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\* | Select-Object DisplayName, DisplayVersion, InstallLocation, Publisher, InstallDate | Format-Table -AutoSize
This will get you all the installed programs on the machine from the HKEY_LOCAL_MACHINE registry hive.
The exact registry path from which the information is retrieved is: HKLM:\Software\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\
The command will display each application's name followed by its version, install location, publisher name and installation date in a PowerShell terminal.
If you want to output that list to a file, simply add >FileName.txt after the command before pressing Enter.
Note that by default the file will be created in the C:\Users\YourUserName\ folder, so if you want the file to be created in a specific location you'll have to use the cd command to move to that location before executing the Get-ItemProperty command.
That covers the 'get the installed programs on a machine' part.
Now we can get into the 'check if app X is installed on the machine' part.
First, load the previously generated file in your JS application; you will use its content to determine whether an application is installed on the computer.
The quickest way to check whether Chrome is installed is to load the file as a string and then do something basic like this:
if (string.includes('Chrome')) { // note: String.includes() is case-sensitive, and the registry entry is named "Google Chrome"
  // Chrome is installed on the machine
  // you can do some more stuff here,
  // like extracting its path from the file content
} else {
  console.log('error: chrome is not installed on this computer');
}
Needless to say, this will only work when run on the same computer whose installed applications you want to check.
Edit: If you want a more practical file to use in JavaScript, you can replace
Format-Table -AutoSize >FileName.txt
with:
Export-Csv -path .\FileName.txt -NoTypeInformation
This way you can split each file line using the String.split(',') method and don't have to do extra work to deal with the spaces between fields.
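A hypothetical sketch of reading that CSV export from Node.js (the file name and field order follow the examples above; splitting on ',' is the simple approach described here, a real CSV parser would be more robust):
const fs = require('fs');

// FileName.txt is the CSV file produced by Export-Csv above
const lines = fs.readFileSync('FileName.txt', 'utf-8').split(/\r?\n/);

const chromeLine = lines.find((line) => line.includes('Chrome'));
if (chromeLine) {
  // a matching line looks like: "Google Chrome","103.0.5060.114","C:\Program Files\...",...
  const [name, version, installPath] = chromeLine
    .split(',')
    .map((field) => field.replace(/^"|"$/g, '')); // strip the surrounding quotes
  console.log(name, version, installPath);
} else {
  console.log('error: chrome is not installed on this computer');
}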
Edit 2:
Here's a full working implementation that lets you retrieve information from a PowerShell script directly from your JavaScript using Node.js.
get_programs.ps1 (PowerShell script file):
chcp 65001 # sets the encoding for displaying chars correctly
Get-ItemProperty HKLM:\Software\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\* | Select-Object DisplayName, DisplayVersion, InstallLocation | ConvertTo-Csv -NoTypeInformation
chcp 850 # restores the default code page; this avoids font changes caused by the terminal modifications
Notice the change at the end of the command, which is now:
| ConvertTo-Csv -NoTypeInformation
This outputs the data in the PowerShell terminal in CSV format, which simplifies parsing it as a string.
If you don't want to use a separate file to hold those few PowerShell commands, you can use this
child = spawn("powershell.exe",[`chcp 65001
Get-ItemProperty HKLM:\\Software\\Wow6432Node\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\* | Select-Object DisplayName, DisplayVersion, InstallLocation | ConvertTo-Csv -NoTypeInformation
chcp 850`]);
as a replacement for
child = spawn("powershell.exe",["./get_programs.ps1"]);
If you choose to do this, don't forget to escape the \ characters, or it will not work.
app.js :
var spawn = require("child_process").spawn, child;
child = spawn("powershell.exe", ["./get_programs.ps1"]); // start our PowerShell script; "./" means it is in the same directory as the .js file

let chromeDetails;

child.stdout.on("data", (data) => { // stdout data event
  // each chunk of PowerShell output arrives here as a Buffer (not necessarily a full line)
  if (data.includes('Chrome')) { // check for the 'Chrome' string in the chunk
    chromeDetails = data.toString(); // store the chunk converted to a string
  }
});

child.stderr.on("data", (data) => { // logs errors
  console.log(`Powershell Errors: ${data}`);
});

child.on("exit", () => { // exit event
  console.log("Powershell Script finished");
  if (chromeDetails != undefined) {
    console.log(`> chrome has been detected on this computer
available information (appName, version, installPath):
${chromeDetails}`);
  } else {
    console.log('> chrome has not been detected on this computer');
  }
});

child.stdin.end(); // close stdin so the PowerShell process can exit
Expected output :
Powershell Script finished
> chrome has been detected on this computer
available information (appName, version, installPath):
"Google Chrome","103.0.5060.114","C:\Program Files\Google\Chrome\Application"
If you are not on Windows, you may want to take a look at "Spawning .bat and .cmd files on Windows" in the Node.js documentation for hints on how to adapt the above app.js code to work on your system.