Build Integration Services
Queues, Workers, Messages and Tables provide the core functionality that can be used to build efficient and resilient integration workflows.
How it works
Everything starts with a simple architecture based on these principles:
- Users or systems create records or objects on the platform
- A message is sent to the relevant Queues
- Workers fetch messages from the Queue and perform the logic, acking to the Queue when the message has been successfully processed, or simply retrying later
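These three steps can be illustrated with a minimal in-memory sketch. This is not platform code; every name below is illustrative:

```javascript
// Minimal in-memory illustration of the flow above; not platform code.
const queue = [];

// 1) A user or system creates a record; 2) a message is sent to the queue.
function createRecord(record) {
  queue.push({ contents: JSON.stringify(record), acked: false });
}

// 3) A worker fetches a message, performs the logic, and acks on success.
function workerTick() {
  const msg = queue.find(m => !m.acked);
  if (!msg) return null;
  const record = JSON.parse(msg.contents); // perform the logic here
  msg.acked = true; // ack: mark as processed; un-acked messages get retried
  return record;
}
```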
Enterstarts Platform Queues support Amazon Simple Queue Service (SQS) and Kafka. RabbitMQ support is in the works and will be made available soon.
Amazon Simple Queue Service
Learn more about AWS SQS here. See this example code (in Scala) showing how to create a multi-threaded AWS SQS consumer.
You can learn more about Kafka here.
Enterstarts Queue Service
The Enterstarts Queue Service exposes a simple, secured queue API designed to handle high-traffic, mission-critical queue put operations. It supports both the SQS and Kafka engines and authenticates every request.
To get started, head to Queue Administration. The module lists all the queues created and configured on the platform.
Open a queue to see its details:
Among other fields, the most important are:
- Queue Name (for Kafka, the partition name)
- Queue Url (the URL on which the server is located; for SQS, the queue link)
- Queue Type (sqs or kafka)
- Username & Password (for SQS, these hold the API key and secret)
- Region (SQS only)
Queue names must be unique, and that is all that's required to start publishing messages to them.
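As a hedged illustration, a queue definition built from the fields above might look like the following. The field names and values here are assumptions for the sketch, not the platform's actual schema:

```javascript
// Illustrative queue definition using the fields listed above.
// The exact field names and shape are assumptions, not the platform schema.
const queueConfig = {
  name: 'etx-event-log', // unique queue name
  url: 'https://sqs.us-east-1.amazonaws.com/123456789012/etx-event-log', // for SQS, the queue link
  type: 'sqs',           // 'sqs' or 'kafka'
  username: '<api key>', // for SQS: the API key
  password: '<secret>',  // for SQS: the secret
  region: 'us-east-1',   // SQS only
};
```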
The Enterstarts platform offers a simple API that allows publishing messages to the queues by either the ID or the name of the queue.
Publish message by queue-id:
Publish message by queue-name:
"contents":"Contents of Message"
The response will look something like this:
AWS SQS Message
In this demo, we can see AWS SQS has received the message:
Enterstarts Workers are Node.js applications connected to queues. They fetch messages from the associated queue, execute logic, and ack each message once processing has completed.
Workers are composed of one or more handlers that can use the package system to execute core business logic when processing messages.
Workers can process multiple (up to 10) messages in parallel.
Workers are similar to api-gateway applications, so they ship with useful (and sometimes required) libraries for data-, network- and I/O-intensive tasks.
Among other libraries, we include:
- amqplib (RabbitMQ)
- AWS SDK
Worker Sneak Peek
Here's a sneak peek into one of our internal workers, which handles user on-boarding processes (signup, signin, forgot-password). We're using the nodemailer library here to send the email.
Note the context.runtime.AckMessage(); call: that's the SDK API that signals to the Queue Server that the message has been successfully processed and can thus be removed from the queue.
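Putting it together, a handler might look like this minimal sketch. The handler signature and message shape are assumptions; only the context.runtime.AckMessage() call comes from the SDK as described above:

```javascript
// Hedged sketch of a worker handler; the signature and message shape are
// assumptions, not the platform's actual handler contract.
async function handleOnboarding(message, context) {
  const payload = JSON.parse(message.contents);
  // ...core business logic would go here (e.g. send the signup email)...
  // Ack only after the work succeeds; otherwise the message is retried later.
  context.runtime.AckMessage();
  return payload;
}
```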
Integration with ZStorage
ZStorage is the platform's standard record and object storage system. ZStorage provides APIs to:
- Create Tables & Fields
- Create Validation Rules
- Create and Update Records/Objects
- Query, Search and Paginate Records/Fields
- Create custom table pages (CRUD) for record manipulation and queries inside the Backoffice
ZStorage tables can be linked to Queues, allowing messages to be published to the queue when records are created or updated in the ZStorage engine. This operation is transactional: record creation and message publishing are executed inside a single transaction.
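Conceptually, the transactional behavior resembles this sketch. All objects and method names here are illustrative assumptions, not the real ZStorage API:

```javascript
// Conceptual sketch of the transactional write-plus-publish described above.
// The objects and method names are illustrative, not the real ZStorage API.
async function writeWithPublish(db, queue, table, record) {
  const rowsBefore = [...(db[table] || [])];
  const msgsBefore = [...queue.messages];
  try {
    (db[table] = db[table] || []).push(record);                // create the record
    await queue.publish({ contents: JSON.stringify(record) }); // publish in the same transaction
  } catch (err) {
    db[table] = rowsBefore;      // roll back the record write if publishing fails,
    queue.messages = msgsBefore; // so the two operations succeed or fail together
    throw err;
  }
}
```

The rollback is what makes the operation all-or-nothing: a failed publish leaves no orphaned record behind.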
Please note that writes will incur additional latency and become dependent on the Queue accepting new messages.
In this example, note how the table "Estudante" is linked to the SQS queue etx-event-log, so upon every write a message will be published to the queue.