Seamlessly interoperate cross-table between Apache Hudi, Delta Lake, and Apache Iceberg
Apache XTable™ is incubating in the Apache Software Foundation and was recently renamed from OneTable

Omni-Directional Interoperability

What is
Apache XTable™?

  • Apache XTable™ provides cross-table omni-directional interop between lakehouse table formats
  • Apache XTable™ is NOT a new or separate format; it provides abstractions and tools for translating lakehouse table format metadata
  • Apache XTable™ was formerly known as OneTable
Choose your source format
Choose your destination format(s)
Apache XTable™ will translate the metadata layers

Why build
Apache XTable™?

  • Choosing a table format is a costly evaluation
  • Each project has rich features that may fit different use-cases
  • Some vendors use a table format as a point of lock-in
  • Your data should be UNIVERSAL

Let's build together

Apache XTable™ is a community-driven open source project. Come join us on GitHub and find easy ways to contribute.
Try it on GitHub


Got questions? We've got answers.

How does it work?


Apache XTable™ reads the existing metadata of your table and writes out metadata for one or more other table formats by leveraging the existing APIs provided by each table format project. The metadata is persisted under a directory in the base path of your table (_delta_log for Delta, metadata for Iceberg, and .hoodie for Hudi). This allows your existing data to be read as though it had been written using Delta, Hudi, or Iceberg. For example, a Spark reader can use spark.read.format("<delta|hudi|iceberg>").load("path/to/data").
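To make the resulting layout concrete, here is a minimal Python sketch that only mimics the directory structure described above (the table path and file names are illustrative; this does not invoke XTable itself):

```python
from pathlib import Path
import tempfile

# Illustrative layout only (not XTable code): after a sync, a single table
# directory holds one copy of the data files plus one metadata directory
# per table format.
base = Path(tempfile.mkdtemp()) / "my_table"
(base / "_delta_log").mkdir(parents=True)   # Delta Lake metadata
(base / "metadata").mkdir()                 # Apache Iceberg metadata
(base / ".hoodie").mkdir()                  # Apache Hudi metadata
(base / "part-0000.parquet").touch()        # shared data file, written once

metadata_dirs = sorted(p.name for p in base.iterdir() if p.is_dir())
print(metadata_dirs)  # ['.hoodie', '_delta_log', 'metadata']
```

Because each format's reader only looks for its own metadata directory, the same Parquet files can be served to Delta, Hudi, and Iceberg clients without copying data.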

How is Apache XTable™ different from Delta Lake Uniform?


Apache XTable™ provides abstraction interfaces that allow omni-directional interoperability across Delta, Hudi, Iceberg, and any future lakehouse table formats such as Apache Paimon. Apache XTable™ is a standalone GitHub project that provides a neutral space for all the lakehouse table formats to collaborate constructively.

Delta Lake Uniform is a one-directional conversion from Delta Lake to Apache Hudi or Apache Iceberg. Uniform is also governed inside the Delta Lake repo.

When should I consider Apache XTable™?


Apache XTable™ can be used to easily switch between any of the table formats, or even to benefit from more than one simultaneously. Some organizations use Apache XTable™ today because they have a diverse ecosystem of tools with polarized vendor support for table formats. Some users want lightning-fast ingestion or indexing from Hudi together with Photon query acceleration of Delta Lake inside Databricks. Some users want managed table services from Hudi, but also want write operations from Trino to Iceberg. Regardless of which combination of formats you need, Apache XTable™ ensures you can benefit from all 3 projects.

Does Apache XTable™ work in every cloud?


Yes, anywhere that Delta, Iceberg, or Hudi work, Apache XTable™ works.

What are the current limitations?


1. Hudi and Iceberg MoR (Merge-on-Read) tables are not supported
2. Delta Lake Deletion Vectors are not supported
3. Transaction timestamps are not synchronized across formats

With Apache XTable™ you pick one primary format and one or more secondary formats. Write operations with the primary format work as normal. Apache XTable™ then translates the metadata from the primary format to the secondaries. When committing the metadata of the secondary formats, the commit timestamp will not exactly match the timestamp shown in the primary.
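As a rough sketch of what this primary/secondary setup looks like in practice, a sync configuration names one source format and the target formats to translate into. The field names below follow the shape of the project's documented dataset configuration, but the bucket path and table name are placeholders; consult the XTable documentation for the exact, current syntax:

```yaml
# Hypothetical sync configuration: Hudi is the primary (source) format,
# Delta and Iceberg are the secondary (target) formats.
sourceFormat: HUDI
targetFormats:
  - DELTA
  - ICEBERG
datasets:
  - tableBasePath: s3://bucket/path/to/my_table
    tableName: my_table
```

With a configuration like this, each run of the XTable sync utility reads the latest Hudi commits and writes the corresponding Delta and Iceberg metadata alongside the existing data files.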

How can I contribute?


Come check out the project on GitHub and add a little star. There are some low-hanging-fruit features, bugs, and documentation tasks to pick up. Reach out directly to any of the contributors on GitHub to ask for help.

How can I learn more?


Follow the Apache XTable™ (Incubating) community channels on LinkedIn and Twitter. Subscribe to the mailing list, follow the project on GitHub, or reach out directly to any of the GitHub contributors to learn more.