Distributed, highly scalable data analytics processing engine

Xenon™: Even faster.

Xenon represents the evolution of our Helium engine. It is a low-latency, scalable data analytics solution designed to manage the retrieval, processing, and indexing of very large datasets (collections of billions of objects) spread across a tightly coupled cluster of servers, each with multi-terabyte persistent storage. Because it is built for distributed systems, it is an excellent vehicle for bringing enterprise features to existing Big Data platforms.

Xenon is an OLAP/OLTP software stack. Helium was already the fastest object store, and the Xenon architecture pushes performance even further by getting closer to bare metal.

The entire engine is just-in-time compiled: the custom datapath, cache, and indexer are generated from the dataset schema. This allows random object access to petabytes of Flash with sub-microsecond access times, enabling true OLTP. It also allows sequential object manipulation across hundreds of terabytes of Flash at sub-microsecond access times, enabling fast OLAP.
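
To make the schema-driven idea concrete, the following is a minimal, purely illustrative Scala sketch of resolving a record layout from a schema once, so that each field access reduces to offset arithmetic into a flat buffer. The schema, field names, and sizes are invented for this example; Xenon's actual just-in-time compilation generates the full native datapath, cache, and indexer, which this toy does not attempt to show.

    import java.nio.ByteBuffer

    object SchemaDatapathSketch {
      // Hypothetical schema: field name and size in bytes.
      val schema: Seq[(String, Int)] = Seq("id" -> 8, "price" -> 8, "qty" -> 4)

      // "Compile" the schema once into per-field offsets and a fixed record size.
      val offsets: Map[String, Int] = {
        var running = 0
        schema.map { case (name, size) =>
          val fieldOffset = running
          running += size
          name -> fieldOffset
        }.toMap
      }
      val recordSize: Int = schema.map(_._2).sum

      // With the layout resolved ahead of time, a field read is a single
      // computed offset into the buffer, with no per-object interpretation.
      def readPrice(buf: ByteBuffer, recordIndex: Int): Double =
        buf.getDouble(recordIndex * recordSize + offsets("price"))

      def main(args: Array[String]): Unit = {
        val buf = ByteBuffer.allocate(recordSize * 2)
        buf.putLong(1L).putDouble(101.5).putInt(10)   // record 0
        buf.putLong(2L).putDouble(99.25).putInt(20)   // record 1
        println(readPrice(buf, 1))                    // prints 99.25
      }
    }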

This continues our path toward decoupling data parallelism from hardware parallelism, and it creates a platform for both analytics and real-time transaction processing, delivered on very dense Flash-based commodity nodes with in-memory performance.

SQL / Analytics processing engine

  1. Performs on-the-fly translation of SQL queries into a custom datapath that is compiled and executed on bare metal

  2. Provides large-dataset persistence and multi-tenancy

  3. Provides a distributed storage class memory abstraction for Flash and NVRAM

  4. Provides connectors that seamlessly integrate the engine with Apache Spark, allowing immediate and automatic offload of Spark operations to the non-volatile storage fabric

  5. Offers query pushdown and RDD/DataFrame integration with Apache Spark (see the connector sketch after this list)

  6. Can connect to any major analytics or machine learning platform
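
As a rough illustration of items 4 and 5, this Scala sketch shows what reading a Xenon-backed dataset from Apache Spark could look like through a pushdown-capable connector. The data source short name ("xenon"), the option key ("dataset"), and the table and column names are assumptions made up for this example rather than the connector's actual API; only the Spark calls themselves (format/option/load, filter, select) are standard.

    import org.apache.spark.sql.SparkSession

    object XenonConnectorSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("xenon-connector-sketch")
          .getOrCreate()

        // Hypothetical data source name and option key; the real
        // connector's format string and options may differ.
        val trades = spark.read
          .format("xenon")
          .option("dataset", "trades")
          .load()

        // Filters and projections like these are exactly the operations a
        // pushdown-capable connector can evaluate close to the storage
        // fabric instead of in the Spark executors.
        trades
          .filter("symbol = 'ACME' AND price > 100.0")
          .select("symbol", "price", "ts")
          .show()

        spark.stop()
      }
    }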

Xenon Characteristics

Xenon Block Diagram

We call it "Distributed Storage Class Memory"

Xenon uses an abstraction layer called “distributed storage class memory” that lets Big Data applications access data as if it were held in a very large persistent, highly available, indexed memory pool. This immediately brings high availability and persistence to the large working sets processed by Big Data platforms such as Apache Spark. In addition, it offloads core analytics functions such as join and sort, running them on bare metal much more efficiently.
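
For a sense of the workloads this offload targets, here is a small, self-contained Scala/Spark sketch of a join followed by a sort, expressed as ordinary DataFrame operations. The tables and columns are invented for illustration; with an offload-capable backing store these operators could execute on the storage nodes, while without one Spark simply runs them itself, so the sketch is runnable either way.

    import org.apache.spark.sql.SparkSession

    object JoinSortOffloadSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("join-sort-offload-sketch")
          .master("local[*]")
          .getOrCreate()
        import spark.implicits._

        // Toy datasets standing in for large persisted working sets.
        val trades = Seq((1, "ACME", 101.5), (2, "BETA", 50.0), (3, "ACME", 99.0))
          .toDF("id", "symbol", "price")
        val companies = Seq(("ACME", "Acme Corp"), ("BETA", "Beta Ltd"))
          .toDF("symbol", "name")

        // A join followed by a sort: the kind of core analytics operators
        // that a storage-side engine can run on bare metal.
        trades.join(companies, "symbol")
          .orderBy($"price".desc)
          .show()

        spark.stop()
      }
    }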