Arrow has two variants of it, and this is one of them. The other variant has a separate offsets array that you use to index into the active "field" array, so it is slower to process in most cases but more compact.
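Assuming this is about Arrow's sparse vs. dense union layouts, here is a minimal pyarrow sketch of both variants (the dense one is the layout with the separate offsets buffer described above):

```python
import pyarrow as pa

# One int8 type id per row: 0 selects the integer child, 1 selects the string child.
type_ids = pa.array([0, 1, 0], type=pa.int8())

# Sparse union: every child is full length, so no offsets buffer is needed,
# but slots not selected by the type id are wasted space.
sparse = pa.UnionArray.from_sparse(
    type_ids,
    [pa.array([1, 0, 2], type=pa.int64()),   # slot 1 is just padding here
     pa.array(["", "a", ""])],               # slots 0 and 2 are just padding here
)

# Dense union: children hold only their own values; a separate int32 offsets
# buffer maps each row to a position inside the selected child. More compact,
# but the extra indirection tends to make it slower to process.
offsets = pa.array([0, 0, 1], type=pa.int32())
dense = pa.UnionArray.from_dense(
    type_ids, offsets,
    [pa.array([1, 2], type=pa.int64()), pa.array(["a"])],
)

print(sparse.to_pylist())
print(dense.to_pylist())
```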
The cuDF interop in the roadmap [1] will be huge for my workloads. XGBoost has the fastest inference time on GPUs, so a fast path straight from these Vortex files to GPU memory seems promising.
Can you explain how it’s faster? GPU memory is just a blob with an address. Is it because the loading algorithms for Vortex align better with XGBoost, or just plain uploading to the GPU?
What you can do if you have a GPU-friendly format is send compressed data over PCI-E and then decompress it on the GPU. Your overall throughput increases, since PCI-E bandwidth is the limiting factor of the overall system.
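A back-of-the-envelope version of that argument (the bandwidth and compression ratio below are assumed illustrative numbers, not measurements of any real system):

```python
# If the host-to-GPU link is the bottleneck, sending compressed bytes and
# decompressing on the GPU scales effective scan throughput by the compression
# ratio, provided the GPU decompressor can keep up.
pcie_bandwidth_gb_s = 25.0   # assumed practical PCIe 4.0 x16 bandwidth
compression_ratio = 3.0      # assumed compressed-to-raw ratio of the column data

uncompressed_path = pcie_bandwidth_gb_s
compressed_path = pcie_bandwidth_gb_s * compression_ratio

print(f"raw bytes over PCI-E:        ~{uncompressed_path:.0f} GB/s of usable data")
print(f"compressed bytes over PCI-E: ~{compressed_path:.0f} GB/s of usable data")
```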
That doesn’t explain how Vortex is faster. Yes, you should send compressed data to the GPU and let it decompress. You should maximize your PCI-E throughput to minimize latency in execution, but what does Vortex bring? Other than "Parquet bad, Vortex good."
The default writer will decompress the values; however, right now you can implement your own write strategy that avoids doing it. We plan on adding that as an option since it’s quite common.
I've never understood why people say the Feather file format isn't meant for "long-term" storage and prefer Parquet for that. Access is much faster from Feather; compression is better with Parquet, but Feather is really good.
Honestly I think Arrow makes Feather redundant. To answer your question, Parquet is optimized for storage on disk - it can store with compression to take less space, and might include clever tricks or some form of indices to query data from the file. Feather, on the other hand, is optimized for loading into memory. It uses the same representation on disk as it does in memory. Very little in the way of compression (if any). Not optimized for disk at all. BUT you can memory map a Feather file and randomly access any part of it in O(1) time (I believe, but do your own due diligence :)
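To make the memory-mapping point concrete, a small pyarrow sketch (the file name and column sizes are made up, and the file is written uncompressed so the mapping stays zero-copy):

```python
import pyarrow as pa
import pyarrow.feather as feather

# Write an uncompressed Feather (Arrow IPC) file so reads can stay zero-copy.
table = pa.table({"x": list(range(100_000)),
                  "y": [float(i) * 0.5 for i in range(100_000)]})
feather.write_feather(table, "example.feather", compression="uncompressed")

# Memory-map the file and open it without copying the data into heap memory;
# random access into any column is then just offsets into the mapping.
with pa.memory_map("example.feather", "r") as source:
    mapped = pa.ipc.open_file(source).read_all()
    print(mapped.column("y")[12_345])
```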
Vortex is a file format, whereas Delta Lake and Iceberg are table formats. It should be compared to Parquet rather than to Delta Lake and Iceberg.
This guest lecture by a maintainer of Vortex provides a good overview of the file format, motivations for its creation and its key features.
Agreed, really need a tl;dr here, because Parquet is boring technology. Going to require quite the sales pitch to move. At minimum, I assume it will be years before I could expect native integration in pandas/polars/etc., which would make it low-effort enough to consider.
Parquet is ..fine, I guess. It is good enough. Why invoke churn? Sell me on the vision.
I think it would still make sense to compare with those table formats, or is the idea that you would only use this if you could not use a table format?
Vortex is, roughly, how you save data to files and Iceberg is the database-like manager of those files. You’ll soon be able to run Iceberg using Vortex because they are complementary, not competing, technologies.
I'd generally encourage new type systems to include sum types as a first-class concept.
[1] https://github.com/vortex-data/vortex/issues/2116
https://www.youtube.com/watch?v=zyn_T5uragA
> Parquet is ..fine, I guess. It is good enough. Why invoke churn? Sell me on the vision.
Mutability would be one such pitch I would like to see ...
> Vortex is, roughly, how you save data to files and Iceberg is the database-like manager of those files. You’ll soon be able to run Iceberg using Vortex because they are complementary, not competing, technologies.
There are other formats though that it can be compared to.
The Lance columnar format is one: https://github.com/lancedb/lancedb
And Nimble from Meta is another: https://github.com/facebookincubator/nimble
Parquet is so core to data infra and so widespread that removing it from its throne is a really, really hard task.
The people behind these projects who are willing to try to do this have my total respect.
This readme has what, max two or three emojis? Compare that to most LLM-generated readmes with a zillion emojis for every single feature.