
I played with Denodo (data virtualization software) a couple years ago and thought it was pretty legit.

In theory, it could be used to provide that industrial-strength abstraction layer between your Tableau/Looker/etc. and your bajillion weird and not-so-weird (RDBMS) data sources.

That makes sense to me: I'd want my data visualization/analytics-type company to be able to concentrate on data visualization/analytics, not on building some insane, never-ending data abstraction layer.

The part that surprised me was that Denodo could allegedly do a lot of smart data caching, thus speeding things up (especially for Hadoop-oriented data sources) and keeping costs down.
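
To illustrate the general idea (not Denodo's actual mechanism): a caching layer in front of a slow source memoizes query results, so repeated identical queries never touch the backend. A toy version in Python, with all names invented:

    import hashlib
    import time

    CACHE = {}          # maps query hash -> (timestamp, rows)
    TTL_SECONDS = 300   # serve cached results for up to 5 minutes

    def query_with_cache(sql, run_on_backend):
        # Key the cache on the query text itself.
        key = hashlib.sha256(sql.encode()).hexdigest()
        hit = CACHE.get(key)
        if hit and time.time() - hit[0] < TTL_SECONDS:
            return hit[1]              # fast path: skip the backend entirely
        rows = run_on_backend(sql)     # slow path: e.g. a Hive/Hadoop query
        CACHE[key] = (time.time(), rows)
        return rows

The real products are obviously doing something far more sophisticated (partial caching, invalidation, pushdown), but that's the basic win.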

I'm guessing the other data virtualization providers can do something similar.

I have had to work with Denodo for over a year now, and it's been a total nightmare. Data virtualization is a concept that's good in theory but doesn't work in practice. Going back to the original sources for each query doesn't scale; it will always be slower than a proper analytics data warehouse. Caching doesn't help, because at that point you might as well just do ETL. Denodo itself is also full of weird behaviors and bugs; my team collectively decided it deserves the most hate of all the "enterprise" tools we use.

The one thing Denodo is good for is as an "access layer", but for that, PrestoDB would be worth a shot, or maybe even just SQLAlchemy and Python.
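
For what it's worth, a minimal sketch of that last option: a thin SQLAlchemy layer that just routes named queries to the right backend. Everything here (connection URLs, table names, the run() helper) is hypothetical, and the presto:// dialect assumes the PyHive driver is installed:

    from sqlalchemy import create_engine, text

    # Placeholder connection URLs; point these at your real sources.
    SOURCES = {
        "orders": create_engine("postgresql://user:pass@oltp-host/sales"),
        "events": create_engine("presto://presto-host:8080/hive/logs"),
    }

    def run(source, sql, **params):
        # Route a parameterized query to one named backend.
        with SOURCES[source].connect() as conn:
            return conn.execute(text(sql), params).fetchall()

    rows = run("orders",
               "SELECT id, total FROM orders WHERE total > :min", min=100)

You lose cross-source joins, but for plain "give analysts one entry point to many databases" access it's a lot less machinery.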
