📄️ Process topology
A topology is a fundamental element of orchestration. It defines a process that can contain any number of actions or connectors for communicating with integrated services.
Workers are services that interact directly with the orchestration layer and contain the code for the individual topology actions (nodes).
ProcessDto is the object used for data transfer in topologies. Every worker interface expects this object. ProcessDto consists of two main parts, which we describe below.
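The two parts can be pictured roughly as a payload and its metadata headers. The following is a minimal illustrative sketch of a ProcessDto-like object, not the actual Orchesty SDK class; the method names here are assumptions for the example:

```typescript
// Illustrative model of a ProcessDto-like object: the two main parts are
// the message payload (data) and the metadata (headers).
// NOTE: this is a sketch, not the real Orchesty SDK implementation.
class ProcessDto {
  constructor(
    public data: string = "", // serialized message body
    public headers: Record<string, string> = {}, // routing and metadata headers
  ) {}

  // Parse the serialized payload as JSON.
  getJsonData<T>(): T {
    return JSON.parse(this.data) as T;
  }

  // Serialize a payload into the dto and return it for chaining.
  setJsonData(payload: unknown): this {
    this.data = JSON.stringify(payload);
    return this;
  }
}

// A worker action receives a ProcessDto and returns a (possibly modified) one.
function exampleAction(dto: ProcessDto): ProcessDto {
  const input = dto.getJsonData<{ name: string }>();
  return dto.setJsonData({ greeting: `Hello, ${input.name}` });
}
```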
📄️ Starting events
This chapter describes the ways in which you can run processes in topologies. A topology always starts with a type of event: one that waits for an incoming signal, or one triggered by a cron schedule or manually. For details on setting up each event, see the editor.
The topology detail provides an editor tool that allows us to model process topologies in the user interface using a notation based on BPMN.
📄️ Applications and connectors
📄️ Results evaluation
Orchesty offers several options for evaluating responses from remote services. Besides a successful call, some responses indicate temporary problems where we may want to retry the call, while others return a code telling us that retrying is pointless. How the process should then proceed depends on the specific case.
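The distinction between success, a temporary problem, and a permanent failure can be sketched as a simple classifier over HTTP status codes. The rules below (2xx succeeds, 429 and 5xx are retried, everything else stops) are an assumed example policy, not Orchesty's built-in behavior:

```typescript
type Evaluation = "success" | "retry" | "stop";

// Hypothetical evaluation policy for remote-service responses:
// 2xx -> success, 429/5xx -> temporary problem worth retrying,
// anything else -> retrying is pointless, stop and handle the case.
function evaluateResponse(statusCode: number): Evaluation {
  if (statusCode >= 200 && statusCode < 300) return "success";
  if (statusCode === 429 || statusCode >= 500) return "retry";
  return "stop";
}
```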
When a node has more than one follower, you can specify which of them should receive the message and continue processing it.
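Selecting followers can be sketched as filtering the node's follower list by the names attached to the outgoing message. The `RoutedMessage` shape and the "empty list means all followers" convention are assumptions for this example:

```typescript
// Hypothetical routing model: a processed message names the followers
// that should receive it; an empty list means "send to all followers".
interface RoutedMessage {
  payload: unknown;
  followers: string[]; // names of follower nodes to forward to
}

function route(message: RoutedMessage, allFollowers: string[]): string[] {
  return message.followers.length === 0
    ? allFollowers
    : allFollowers.filter((f) => message.followers.includes(f));
}
```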
To process batches of data, Orchesty offers several options that can be combined within topologies, producing patterns suited to different use cases. A basic property of a node is that, upon receiving a single message, it can emit any number of messages at its output.
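The one-message-in, many-messages-out property can be illustrated with a generator that splits one incoming list payload into several outgoing chunks. `splitBatch` is a hypothetical helper for this sketch, not part of the Orchesty SDK:

```typescript
// Sketch: a batch node receives one message (a list of items) and emits
// any number of output messages, here one per chunk of `chunkSize` items.
function* splitBatch(items: number[], chunkSize: number): Generator<number[]> {
  for (let i = 0; i < items.length; i += chunkSize) {
    yield items.slice(i, i + chunkSize);
  }
}
```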
📄️ Data storage
In some cases, it is preferable to store the processed data continuously and work over the stored collection rather than sending all the data through queues. This is especially true for large batches of data and for data that is complicated to download from its source or sources. If the process fails at some point, the stored data saves us from having to retrieve it again.
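The resume-on-failure idea can be sketched as a store keyed by record id: a rerun only downloads the records that are not yet persisted. `DataStore` and `fetchMissing` are hypothetical names for this illustration, not Orchesty APIs:

```typescript
// Sketch: persist downloaded records keyed by id, so a failed run can
// resume without re-downloading everything.
class DataStore<T> {
  private records = new Map<string, T>();

  save(id: string, record: T): void {
    this.records.set(id, record);
  }

  has(id: string): boolean {
    return this.records.has(id);
  }

  all(): T[] {
    return [...this.records.values()];
  }
}

// Download only the records the store does not already hold;
// returns how many records were actually fetched.
function fetchMissing(
  ids: string[],
  store: DataStore<string>,
  fetch: (id: string) => string,
): number {
  let fetched = 0;
  for (const id of ids) {
    if (!store.has(id)) {
      store.save(id, fetch(id));
      fetched += 1;
    }
  }
  return fetched;
}
```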
The Limiter controls the frequency of requests to a specific API so that its allowed limits are not exceeded. To behave correctly, the limiter operates across all topologies and nodes simultaneously: it calculates the limit for a given application from the calls of all running processes.
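The key idea, one shared counter per application consulted by every caller, can be sketched as a sliding-window limiter. This is an assumed simplification of the concept, not Orchesty's actual limiter implementation:

```typescript
// Sketch of a shared rate limiter: one timestamp list per application,
// consulted by every topology and node, so the combined call rate of all
// running processes stays within the application's limit.
class Limiter {
  private calls = new Map<string, number[]>(); // app -> call timestamps (ms)

  constructor(private maxCalls: number, private windowMs: number) {}

  // Returns true if the call may proceed now; false means the limit is
  // reached and the caller should delay the request.
  tryAcquire(app: string, now: number): boolean {
    const recent = (this.calls.get(app) ?? []).filter(
      (t) => now - t < this.windowMs,
    );
    if (recent.length >= this.maxCalls) {
      this.calls.set(app, recent);
      return false;
    }
    recent.push(now);
    this.calls.set(app, recent);
    return true;
  }
}
```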
The trash is a tool for persisting invalid or undelivered messages. Similar to a user task, it allows you to preview and correct data. The message headers are also visible, so we can read the reason why the message was discarded to the trash.
📄️ Performance optimization and ordering
One way to affect the performance of topologies is by setting the prefetch of each node. Prefetch is the number of messages a node processes in parallel. Setting it to a value greater than 1 can significantly save resources and increase the throughput of the topology, but it cannot guarantee that the order of messages is maintained.
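The prefetch semantics can be sketched as a consumer that pulls messages in windows of `prefetch` and handles each window concurrently; within a window, completion order is not guaranteed, which is why ordering cannot be relied on. `consume` is a hypothetical helper for this illustration:

```typescript
// Sketch: "prefetch" as the number of messages handled in parallel.
// Messages are taken in windows of `prefetch`; each window is processed
// concurrently, so completion order inside a window is not guaranteed.
async function consume<T, R>(
  messages: T[],
  prefetch: number,
  handler: (m: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = [];
  for (let i = 0; i < messages.length; i += prefetch) {
    const window = messages.slice(i, i + prefetch);
    results.push(...(await Promise.all(window.map(handler))));
  }
  return results;
}
```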
For logging, we use the logger, which supports the info and error log types.
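The two log types can be pictured with a minimal logger sketch; this interface is an assumption for illustration, not the Orchesty SDK logger:

```typescript
// Minimal logger sketch with the two supported log types.
type Level = "info" | "error";

class Logger {
  public entries: { level: Level; message: string }[] = [];

  info(message: string): void {
    this.entries.push({ level: "info", message });
  }

  error(message: string): void {
    this.entries.push({ level: "error", message });
  }
}
```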