Scheduled jobs sometimes run, sometimes don't - javascript

I am working on a Spring Boot project. I created a scheduler which has to run every 10 minutes, and I have two pods running. ShedLock is also implemented, but the job sometimes runs and sometimes doesn't, without throwing any error; after some time it starts executing again.
Can anyone help with this?

The given information is not much, but I can show you a working example in a distributed system, which I assume you have, since you mention pods, meaning your service runs on Kubernetes.
application.yml/application.properties
your-app-config-root:
  scheduling:
    shedlock:
      min-time-lock-should-kept: ${SHEDLOCK_MIN_TIME_LOCK_KEPT:PT10S}
      max-time-lock-should-kept: ${SHEDLOCK_MAX_TIME_LOCK_KEPT:PT30S}
    alert-on-unfinished-cleanup-scheduler:
      enabled: ${ALERT_ON_UNFINISHED_CLEANUP_SCHEDULER_ENABLED:false}
      cron: ${ALERT_ON_UNFINISHED_CLEANUP_SCHEDULER_CRON:0 0 8,11,14 * * *} # by default every day at 8, 11, 14
      minutes-since-cleanup-job-unfinished: ${MINUTES_SINCE_CLEANUP_JOB_UNFINISHED:1440} # by default 24 hours
Configuration for Scheduler
import net.javacrumbs.shedlock.core.LockProvider;
import net.javacrumbs.shedlock.provider.jdbctemplate.JdbcTemplateLockProvider;
import net.javacrumbs.shedlock.spring.annotation.EnableSchedulerLock;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableScheduling;

import javax.sql.DataSource;

import static net.javacrumbs.shedlock.spring.annotation.EnableSchedulerLock.InterceptMode.PROXY_METHOD;

/**
 * The defaultLockAtMostFor is the default value for how long the lock should be kept in case the service
 * which obtained the lock died before releasing it.
 * This is just a fallback; under normal circumstances the lock is released as soon as the task finishes.
 */
@Configuration
@EnableScheduling
@EnableSchedulerLock(interceptMode = PROXY_METHOD,
        defaultLockAtMostFor = "${your-app-config-root.scheduling.shedlock.max-time-lock-should-kept}",
        defaultLockAtLeastFor = "${your-app-config-root.scheduling.shedlock.min-time-lock-should-kept}")
public class SchedulingConfiguration {

    @Bean
    public LockProvider lockProvider(final DataSource dataSource) {
        return new JdbcTemplateLockProvider(dataSource);
    }
}
The Cron job itself
import io.your-app-config-root.platform.cleanup.CleanupService;
import io.your-app-config-root.platform.cleanup.metric.MetricService;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import net.javacrumbs.shedlock.spring.annotation.SchedulerLock;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

import static io.your-app-config-root.platform.cleanup.scheduler.AlertOnUnfinishedCleanupJob.ALERT_ON_UNFINISHED_CLEANUP_SCHEDULER_PROPERTIES;

/**
 * This scheduled job is responsible for alerting on unfinished cleanup jobs after 24 hours.
 * ShedLock is included to make sure concurrent jobs are not run on the same task at the same time.
 */
@Slf4j
@Component
@ConditionalOnProperty(value = ALERT_ON_UNFINISHED_CLEANUP_SCHEDULER_PROPERTIES + ".enabled", havingValue = "true")
@RequiredArgsConstructor
public class AlertOnUnfinishedCleanupJob {

    public static final String ALERT_ON_UNFINISHED_CLEANUP_SCHEDULER_PROPERTIES = "your-app-config-root.scheduling.alert-on-unfinished-cleanup-scheduler";

    private final CleanupService cleanupService;
    private final MetricService metricService;

    @Scheduled(cron = "${" + ALERT_ON_UNFINISHED_CLEANUP_SCHEDULER_PROPERTIES + ".cron}")
    @SchedulerLock(name = "alertOnUnfinishedCleanup")
    public void alertOnUnfinishedCleanup() {
        log.info("Alert on unfinished cleanup job has started.");
        // DO THINGS
    }
}
Notice that this cron job can be turned on or off via an env var, depending on your needs.
The SQL that I used for DB schema migration via Flyway:
CREATE TABLE shedlock
(
    name       varchar(64) UNIQUE NOT NULL,
    lock_until timestamp(3) NOT NULL,
    locked_at  timestamp(3) NOT NULL,
    locked_by  varchar(36) NOT NULL,
    CONSTRAINT pk_schedlock_name PRIMARY KEY (name)
);
There are plenty of online cron generators to help you build your cron expression.
You wrote that you need to run your job every 10 minutes. Note that Spring's cron expressions start with a seconds field, so the right expression for that is
"0 */10 * * * *" (whereas "*/10 * * * * *" would fire every 10 seconds).
If you have any further questions, please give more details. :)

Related

Import abstractions into the '.sol' file that uses them instead of deploying them separately

token BigNumber { _hex: '0x5af3107a4000', _isBigNumber: true }
*** Deployment Failed ***
"SKYCrowdsale" is an abstract contract or an interface and cannot be deployed.
Import abstractions into the '.sol' file that uses them instead of deploying them separately.
Contracts that inherit an abstraction must implement all its method signatures exactly.
A contract that only implements part of an inherited abstraction is also considered abstract.
Exiting: Review successful transactions manually by checking the transaction hashes above on Etherscan.
Error: *** Deployment Failed ***
"SKY1Crowdsale" is an abstract contract or an interface and cannot be deployed.
Import abstractions into the '.sol' file that uses them instead of deploying them separately.
Contracts that inherit an abstraction must implement all its method signatures exactly.
A contract that only implements part of an inherited abstraction is also considered abstract.
at Deployer._preFlightCheck (C:\Users\yogeshk3\AppData\Roaming\npm\node_modules\truffle\build\webpack:\packages\deployer\src\deployment.js:185:1)
at processTicksAndRejections (internal/process/task_queues.js:89:5)
at C:\Users\yogeshk3\AppData\Roaming\npm\node_modules\truffle\build\webpack:\packages\deployer\src\deployment.js:294:1
Truffle v5.5.3 (core: 5.5.3)
Node v12.4.0
//SPDX-license-indetifier :unlicense;
pragma solidity ^0.5.0;
import "../node_modules/#openzeppelin/contracts/crowdsale/Crowdsale.sol";
import "../node_modules/#openzeppelin/contracts/crowdsale/validation/CappedCrowdsale.sol";
import "../node_modules/#openzeppelin/contracts/crowdsale/validation/TimedCrowdsale.sol";
import "../node_modules/#openzeppelin/contracts/crowdsale/validation/WhitelistCrowdsale.sol";
import "../node_modules/#openzeppelin/contracts/crowdsale/distribution/RefundableCrowdsale.sol";
import "../node_modules/#openzeppelin/contracts/crowdsale/emission/MintedCrowdsale.sol";
import "../node_modules/#openzeppelin/contracts/token/ERC20/ERC20Mintable.sol";
import "../node_modules/#openzeppelin/contracts/token/ERC20/TokenTimelock.sol";
import "../node_modules/#openzeppelin/contracts/token/ERC20/ERC20Pausable.sol";
contract SKYCrowdsale is Crowdsale, CappedCrowdsale,WhitelistCrowdsale ,MintedCrowdsale{
// Track investor contributions
uint256 public investorMinCap = 2000000000000000; // 0.002 ether
uint256 public investorHardCap = 50000000000000000000; // 50 ether
using RefundableCrowdsale for RefundableCrowdsale;
mapping(address => uint256) public contributions;
constructor(
uint _rate,
address payable _wallet,
IERC20 token,
uint256 _cap,
uint256 _goal
)
public
Crowdsale(_rate ,_wallet ,token)
CappedCrowdsale(_cap)
{
}
/**
* @dev Returns the amount contributed so far by a specific user.
* @param _beneficiary Address of contributor
* @return User contribution so far
*/
function getUserContribution(address _beneficiary)
public view returns (uint256)
{
return contributions[_beneficiary];
}
/**
* @dev Extend parent behavior requiring purchase to respect investor min/max funding cap.
* @param _beneficiary Token purchaser
* @param _weiAmount Amount of wei contributed
*/
function _updatePurchasingState(
address _beneficiary,
uint256 _weiAmount
)
internal
{
super._preValidatePurchase(_beneficiary, _weiAmount);
uint256 _existingContribution = contributions[_beneficiary];
uint256 _newContribution = _existingContribution.add(_weiAmount);
require(_newContribution >= investorMinCap && _newContribution <= investorHardCap);
contributions[_beneficiary] = _newContribution;
}
}

Cannot import Amplitude SDK in an Ionic 3 app

I'm trying to use Amplitude SDK to get statistics from my Ionic 3 app. However, since the app is written in TypeScript with a certain file architecture, it is not as simple as in the official documentation.
However, I found the @types/amplitude-js package and I thought it would resolve all my problems. But unfortunately, when I compile my app on my device using ionic cordova run android --device, the app doesn't load and I get the following error message:
Uncaught Error: Encountered undefined provider!
Usually this means you have a circular dependencies (might be caused by using 'barrel' index.ts files.
Note: this error also appears when I run ionic serve.
Here is what I did, step by step:
I installed the @types/amplitude-js package by running npm install --save @types/amplitude-js.
I installed the original Amplitude SDK by running npm install amplitude-js. I noticed it was necessary to do that; otherwise my app wouldn't compile with only the @types package (which makes sense).
I added the following lines to my app.module.ts
import { AmplitudeClient } from 'amplitude-js';
[...]
@NgModule({
[...]
providers: [
AmplitudeClient,
[...]
]
});
I also created an AmplitudeProvider, which will manage all my Amplitude events throughout my app:
import { Injectable } from '@angular/core';
import { HttpServiceProvider } from "../http-service/http-service";
import { AmplitudeClient } from 'amplitude-js';
/**
* AmplitudeProvider
* @description Handles Amplitude statistics
*/
@Injectable()
export class AmplitudeProvider {
constructor(
public http: HttpServiceProvider,
public amplitude: AmplitudeClient
) {
this.amplitude.init("MY_AMPLITUDE_KEY");
}
/**
* logEvent
* @description Logs an event in Amplitude
* @param eventTitle Title of the event
*/
public logEvent(title) {
// Do things not relevant here
}
}
I'm certain that I'm doing something wrong with my dependency injection and/or my imports, but I don't understand what. And I don't see any circular dependency, since the amplitude-js package is not made by me and does not import any of my providers.
Thanks in advance to anyone who will point me in the right direction!
AmplitudeClient is not an Ionic provider, therefore you can't just import it and put it in your class constructor.
To use Amplitude in your provider you want to import the amplitude module itself. Your code should be similar to this:
import amplitude, { AmplitudeClient } from 'amplitude-js';
@Injectable()
export class AmplitudeProvider {
private client: AmplitudeClient;
constructor(
public http: HttpServiceProvider,
public database: DatabaseProvider
) {
this.client = amplitude.getInstance();
this.client.init("MY_AMPLITUDE_KEY");
}
}

React Native Android - custom BroadcastReceiver/Service is being killed after some hours

I'm building an app using React Native which requires a service that detects missed calls, sends them to the server and then shows a notification in the phone's status bar.
I decided to write my own extension to handle that because I didn't find any node module sufficient for my needs. Unfortunately, the service is being killed after some hours and I can't deal with that. Basically, I'm a JavaScript developer and native Java code is like a black hole for me, so I'll be very grateful for any help.
The app uses Headless JS for sending data to the server; basically the whole extension was based on these articles:
http://www.learn-android-easily.com/2013/06/detect-missed-call-in-android.html
https://codeburst.io/simple-android-call-recorder-in-react-native-headlessjs-task-614bcc56efc4
I've found some similar topics:
Android service process being killed after hours
https://fabcirablog.weebly.com/blog/creating-a-never-ending-background-service-in-android
I tried to follow the instructions described there, but all of those solutions relate only to native code without React Native and Headless JS, so I don't know if they will work for an app using React Native, or (more likely) I'm doing something wrong.
Here is the part of my AndroidManifest responsible for the Service and BroadcastReceiver:
(...)
<service android:name="com.app.service.CallLogService" />
<receiver android:name="com.app.receiver.CallLogReceiver">
<intent-filter android:priority="0">
<action android:name="android.intent.action.PHONE_STATE" />
</intent-filter>
</receiver>
(...)
My CallLogService class:
package com.app.service;
import android.content.Intent;
import android.os.Bundle;
import com.facebook.react.HeadlessJsTaskService;
import com.facebook.react.bridge.Arguments;
import com.facebook.react.jstasks.HeadlessJsTaskConfig;
import javax.annotation.Nullable;
public class CallLogService extends HeadlessJsTaskService {
@Nullable
protected HeadlessJsTaskConfig getTaskConfig(Intent intent) {
Bundle extras = intent.getExtras();
return new HeadlessJsTaskConfig(
"CallLog",
extras != null ? Arguments.fromBundle(extras) : null,
5000,
true
);
}
}
My CallLogReceiver class:
package com.app.receiver;
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.util.Log;
import com.app.service.CallLogService;
import com.facebook.react.HeadlessJsTaskService;
public final class CallLogReceiver extends BroadcastReceiver {
public final void onReceive(Context context, Intent intent) {
(...)
callerPhoneNumber = intent.getStringExtra("incoming_number");
Intent callIntent = new Intent(context, CallLogService.class);
callIntent.putExtra("phone_number", callerPhoneNumber);
context.startService(callIntent);
HeadlessJsTaskService.acquireWakeLockNow(context);
}
}
I'm using React Native 0.50.3
Finally, I have an additional question: I noticed that after restarting the phone the service is also killed. How can I prevent that situation too?
Edit:
I noticed that if the app is in the background, then after sending a request the response is not recorded.
But after firing a new request, the application gets the response from the previous request. I'm using axios for the AJAX calls.
E.g.:
let callAjax = function(counter){
console.log('Request ' + counter);
axios.get('/user?ID=12345')
.then(function (resp) {
console.log('Response ' + counter);
})
};
callAjax(1);
setTimeout(() => {
callAjax(2);
}, 5000);
When the app is in the background I get:
Request 1
After 5 sec
Response 1 Request 2
When the app is in the foreground everything is OK:
Request 1 Response 1
After 5 sec
Request 2 Response 2

Meteor - Slow loading with large dataset

I'm working with quite large data in MongoDB and using it in my Meteor application. However, the size of the data is causing the webpage to load incredibly slowly.
The collection is around 17MB in size and contains 84,000 documents.
Using the Publish/Subscribe method I have the following code:
Imports -> Both -> MyCollection.js:
import { Mongo } from 'meteor/mongo';
export const statistics = new Mongo.Collection('statistics');
Server -> Main.js:
import { Meteor } from 'meteor/meteor';
import { HTTP } from 'meteor/http';
import { statistics } from '/imports/both/MyCollection';
Meteor.publish('statistics', function publishSomeData() {
return statistics.find();
});
Client -> Main.js:
import { Meteor } from 'meteor/meteor';
import { Template } from 'meteor/templating';
import { ReactiveVar } from 'meteor/reactive-var';
import { statistics } from '/imports/both/MyCollection';
import './main.html';
Template.application.onCreated(function applicationOnCreated() {
this.subscribe('statistics');
});
Template.application.onRendered(function applicationOnRendered() {
this.autorun(() => {
if (this.subscriptionsReady()) {
const statisticsData = statistics.findOne();
console.log(statisticsData);
}
});
});
So, like I say, this method works and the console logs the data. However, on an internet connection of around 60 Mbps it takes around 2 minutes to load the page and finally log the data to the console, and sometimes I just get the 'Google is not responding' alert and I'm forced to force quit.
What is a more efficient means of loading the data into the application in order to avoid this terribly slow loading time? Any help would be greatly appreciated.
Many thanks,
G
Limit the amount of data you publish to the client.
Either publish only some fields of the statistics collection, or 'lazy load' documents: pass a number-of-docs argument to the publication and use the limit option of find to send only that many docs to the client.
Alternatively, compile the data as needed on the server and only send the compiled data to the client.
Much more specific examples cannot be given without knowing the collection's nature.
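That said, as a rough sketch of the 'fields + limit' approach (the publication name, field names and page size here are assumptions, since the shape of the statistics documents isn't shown):
import { Meteor } from 'meteor/meteor';
import { check } from 'meteor/check';
import { statistics } from '/imports/both/MyCollection';

// Publish only a slice of the collection, and only the fields the template needs.
Meteor.publish('statistics.page', function publishStatisticsPage(requestedLimit) {
  check(requestedLimit, Number);
  return statistics.find({}, {
    fields: { name: 1, value: 1 },        // hypothetical field names
    limit: Math.min(requestedLimit, 500), // cap what a single client can pull
    sort: { _id: 1 },
  });
});
On the client you would then call this.subscribe('statistics.page', 100) and raise the limit only as the user actually needs more documents.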

Can Selenium IDE/Builder run same test case on many pages?

Is there a way to run the same Selenium test case on many pages without specifically defining a list of pages?
Say, for example, I have a UIMap pageset defined like this:
var map = new UIMap();
map.addPageset({
name: 'pages',
description: 'all pages',
pathRegexp: '^thisistheroot/$'
});
In the pageset, I have all the elements defined for a test script that I want to test on each page in the pageset.
All of this is added to my core extensions.
Am I able to run a test case on the entire pageset? How can I do that?
I've looked into the issue a little more. Is there a way this is possible with Jenkins? https://jenkins-ci.org/
Edit:
I was trying to avoid using Selenium WebDriver, but if it is possible to obtain links as you would in a UIMap, that would probably point me in the right direction as well. I would try to iterate over the links with a single test case, which can easily be done in Java. I'm using Java for WebDriver, by the way.
Thanks.
The simple answer is "no", but Selenium WebDriver is certainly one of the best choices for finding the links on a page and iterating over them. There is a concept very similar to your UI mapping called PageFactory, where you map all the page elements in separate classes to keep responsibilities separate, which makes debugging and refactoring much easier. I have used the PageFactory concept here.
Now, coming back to your question: you can easily find the list of links present on a page; you just need to write the selector a little carefully. You can then iterate over the links, navigating back and forth as needed.
Proof of concept on Google
BasePage
package google;
import org.openqa.selenium.By;
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.PageFactory;
import org.openqa.selenium.support.ui.ExpectedCondition;
import org.openqa.selenium.support.ui.WebDriverWait;
import java.util.NoSuchElementException;
/**
* Defines the generic methods/functions for PageObjects.
*/
public class BaseClass {
protected WebDriver driver;
/**
* @param _driver
* @param byKnownElement
*/
public BaseClass(WebDriver _driver, By byKnownElement) {
//assigning driver instance globally.
driver = _driver;
this.correctPageLoadedCheck(byKnownElement);
/* Instantiating all elements since this is super class
and inherited by each and every page object */
PageFactory.initElements(driver, this);
}
/**
* Verifies correct page was returned.
*
* @param by
*/
private void correctPageLoadedCheck(By by) {
try {
driver.findElement(by).isDisplayed();
} catch (NoSuchElementException ex) {
throw ex;
}
}
}
PageObject inheriting BasePage
package google;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.FindBy;
import org.openqa.selenium.support.How;
import java.util.List;
/**
* Created by Saifur on 5/30/2015.
*/
public class GoogleLandingPage extends BaseClass {
private static final By byKnownElement = By.xpath("//a[text()='Sign in']");
/**
* @param _driver
*/
public GoogleLandingPage(WebDriver _driver) {
super(_driver, byKnownElement);
}
//This should find all the links of the page
//You need to write the selector such a way
// so that it will grab all intended links.
@FindBy(how = How.CSS, using = ".gb_e.gb_0c.gb_r.gb_Zc.gb_3c.gb_oa>div:first-child a")
public List<WebElement> ListOfLinks;
}
BaseTest
package tests;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
public class BaseTest {
public WebDriver driver;
String url = "https://www.google.com/";
@BeforeClass
public void SetUpTests() {
driver = new FirefoxDriver();
//Navigate to url
driver.navigate().to(url);
//Maximize the browser window
driver.manage().window().maximize();
}
@AfterClass
public void CleanUpDriver() throws Exception {
try {
driver.quit();
}catch (Exception ex){
throw ex;
}
}
}
Link Iterator test inheriting BaseTest
package tests;
import google.GoogleLandingPage;
import org.openqa.selenium.WebElement;
import org.testng.annotations.Test;
import java.util.List;
/**
* Created by Saifur on 5/30/2015.
*/
public class LinksIteratorTests extends BaseTest {
@Test
public void IterateOverLinks(){
GoogleLandingPage google = new GoogleLandingPage(driver);
List<WebElement> elementList = google.ListOfLinks;
for (int i=0;i<elementList.size(); i++){
elementList.get(i).click();
//possibly do something else to go back to the previous page
driver.navigate().back();
}
}
}
Note: I am using TestNG to maintain the tests. Also note that for lazy-loading pages you may need to add an explicit wait where necessary.
Actually it's simple to run an IDE test against one specific page (base URL, actually): java -jar selenium-server.jar -htmlSuite "*firefox" "http://baseURL.com" "mytestsuite.html" "results.html"
So what you need to do is use Jenkins (or any bash/batch script) to run that command multiple times with the base URL set to "http://baseURL.com/page1", "http://baseURL.com/page2", etc.
This will only get you as far as a static list of pages to test against. If you want a dynamic list, you'd also have to "crawl" the pages, and you could do that in a similar batch/bash script to obtain the list of pages to test against.
In that case you'd be best off investing beyond Selenium IDE and switching to WebDriver, where you'll have more power for loops and flow control.
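For instance, a bare-bones WebDriver loop over such a list of pages might look like the sketch below; the URLs are the placeholders from above, and the title check stands in for whatever your IDE test case actually verifies:
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

import java.util.Arrays;
import java.util.List;

public class MultiPageCheck {
    public static void main(String[] args) {
        // Static list of pages; a small crawler could build this list instead.
        List<String> pages = Arrays.asList(
                "http://baseURL.com/page1",
                "http://baseURL.com/page2");

        WebDriver driver = new FirefoxDriver();
        try {
            for (String page : pages) {
                driver.navigate().to(page);
                // Run the same steps/assertions as your IDE test case here.
                System.out.println(page + " -> " + driver.getTitle());
            }
        } finally {
            driver.quit();
        }
    }
}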
