Atlas Stream Processing, a solution that aggregates and enriches streams of high-velocity, rapidly changing event data and unifies how developers work with data, is now in public preview.
In the transition from private to public preview, Atlas Stream Processing has focused on the developer experience, aiming to become a go-to solution for development teams. A significant part of this work is integration with Visual Studio Code: the MongoDB VS Code extension now supports connections to Stream Processing instances, so developers can create and manage processors within a familiar environment. This integration streamlines development by reducing the need to switch between tools, leaving developers more time to build applications.
Another notable improvement in the public preview of Atlas Stream Processing is the advancement of its dead letter queue (DLQ) capabilities. A DLQ captures messages that fail processing so they can be inspected and reprocessed rather than silently lost, and the latest updates make it even more useful. DLQ messages can now be displayed directly when executing pipelines with sp.process() and when calling .sample() on running processors. This eliminates the previous requirement to configure a separate target collection to serve as the DLQ, simplifying the development process.
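As a rough sketch of how this looks in practice, the pipeline below validates incoming events and routes failures to the DLQ. The connection name, topic, and schema are placeholders, and the exact preview behavior may differ; the pipeline is written as a plain JavaScript array, as it would be in a mongosh session.

```javascript
// Illustrative sketch: "kafkaProd" and "orders" are placeholder names.
const pipeline = [
  // Read events from a Kafka topic registered in the connection registry.
  { $source: { connectionName: "kafkaProd", topic: "orders" } },
  {
    // Documents failing the schema are sent to the dead letter queue instead
    // of halting the processor.
    $validate: {
      validator: { $jsonSchema: { required: ["orderId", "total"] } },
      validationAction: "dlq"
    }
  }
];

// In an Atlas Stream Processing mongosh session (not runnable outside one):
// sp.process(pipeline);   // DLQ messages now appear inline in the output
// sp.orders.sample();     // sampling a running processor also surfaces them
```

Because DLQ output now surfaces inline, a quick sp.process() run is often enough to spot malformed events while iterating on a pipeline.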
Atlas Stream Processing has also added features that bridge the gap between traditional database operations and real-time stream processing. Windowing functions and support for merging and emitting data to an Atlas database or a Kafka topic mark significant advancements. The public preview introduces the $lookup operator, allowing developers to enrich stream-processed documents by joining them with data from remote Atlas clusters.
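A minimal sketch combining these pieces might look as follows; the connection, database, collection, and field names are all illustrative assumptions, not a definitive implementation.

```javascript
// Illustrative sketch: all names ("kafkaProd", "atlasAnalytics", etc.) are placeholders.
const enrichAndAggregate = [
  // Ingest page-view events from Kafka.
  { $source: { connectionName: "kafkaProd", topic: "pageViews" } },

  // $lookup enriches each event by joining against a collection
  // on a remote Atlas cluster.
  {
    $lookup: {
      from: { connectionName: "atlasAnalytics", db: "crm", coll: "customers" },
      localField: "customerId",
      foreignField: "_id",
      as: "customer"
    }
  },

  // Aggregate the enriched events over 5-minute tumbling windows.
  {
    $tumblingWindow: {
      interval: { size: 5, unit: "minute" },
      pipeline: [
        { $group: { _id: "$customer.segment", views: { $sum: 1 } } }
      ]
    }
  },

  // Persist each window's results to an Atlas collection.
  { $merge: { into: { connectionName: "atlasAnalytics", db: "crm", coll: "viewCounts" } } }
];
```

Swapping the final $merge for an $emit stage would publish the windowed results to a Kafka topic instead of an Atlas collection.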
This enhancement, alongside improved change stream support that now includes pre- and post-images, empowers developers to handle complex tasks such as calculating deltas between document fields and accessing the full contents of deleted documents, enabling more sophisticated customer experiences.
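The delta use case can be sketched roughly as below. This assumes a change stream $source that can request pre-images via a config option (the fullDocumentBeforeChange spelling follows change stream conventions and is an assumption here, as are all names).

```javascript
// Illustrative sketch: connection, db, and coll names are placeholders.
const deltaPipeline = [
  {
    $source: {
      connectionName: "atlasProd",
      db: "inventory",
      coll: "stock",
      // Assumed option: request the document's pre-image with each change event.
      config: { fullDocumentBeforeChange: "required" }
    }
  },
  // Only updates have both a before- and an after-image to compare.
  { $match: { operationType: "update" } },
  {
    // Compute the change in quantity between the post- and pre-images.
    $addFields: {
      quantityDelta: {
        $subtract: ["$fullDocument.quantity", "$fullDocumentBeforeChange.quantity"]
      }
    }
  }
];
```

For deletes, the pre-image alone gives access to the full contents of the removed document, which was not previously possible from the change event itself.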
Atlas Stream Processing now supports conditional routing with dynamic expressions in the $merge and $emit stages, enabling routing strategies based on document field values. This allows messages to be dynamically forked to different Atlas collections or Kafka topics, leveraging the Query API's flexibility for diverse use cases. Additionally, idle stream timeouts address the challenge of streams with inconsistent data flows by allowing a stream to close automatically after a specified period of inactivity. Together, these enhancements give developers more robust tools for real-time data processing, catering to the needs of advanced teams and enabling richer, more responsive customer experiences.
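A hedged sketch of both ideas is below: the topic in $emit is an expression evaluated per document, forking messages by a field value, and the window carries an idle timeout. The idleTimeout spelling and placement, like every name here, is an assumption about the preview API, not confirmed syntax.

```javascript
// Illustrative sketch: names and the idleTimeout/dynamic-topic spellings are assumptions.
const routed = [
  { $source: { connectionName: "kafkaProd", topic: "alerts" } },
  {
    $tumblingWindow: {
      interval: { size: 1, unit: "minute" },
      // Assumed option: close the window if no data arrives for 30 seconds.
      idleTimeout: { size: 30, unit: "second" },
      pipeline: [{ $group: { _id: "$severity", count: { $sum: 1 } } }]
    }
  },
  {
    // Dynamic routing: the topic is an expression evaluated per document,
    // forking results into e.g. "alerts.critical" or "alerts.info".
    $emit: {
      connectionName: "kafkaProd",
      topic: { $concat: ["alerts.", "$_id"] }
    }
  }
];
```

The same expression-based approach would apply to $merge, selecting a target Atlas collection from a document field instead of a Kafka topic.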
“Public preview is a huge step forward for us as we expand the developer data platform and enable more teams with a stream processing solution that simplifies the operational complexity of building reactive, responsive, event-driven applications, while also offering an improved developer experience,” Clark Gates-George and Joe Niemiec from the MongoDB team wrote in a blog post.