Alternatives to Presto

Apache Spark, Stan, Apache Impala, Snowflake, and Apache Drill are the most popular alternatives and competitors to Presto.

What is Presto and what are its top alternatives?

Distributed SQL Query Engine for Big Data
Presto is a tool in the Big Data Tools category of a tech stack.
Presto is an open source tool; its source repository is hosted on GitHub.
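
To ground the comparison, here is a minimal sketch of querying Presto from Python with the presto-python-client package; the coordinator host, catalog, and table names are placeholders, not part of the original page:

```python
# A minimal sketch, assuming presto-python-client is installed
# (pip install presto-python-client); host/catalog/table are placeholders.
import prestodb

conn = prestodb.dbapi.connect(
    host="presto-coordinator.example.com",  # hypothetical coordinator
    port=8080,
    user="analyst",
    catalog="hive",
    schema="default",
)
cur = conn.cursor()
# Presto fans the query out across workers and streams results back.
cur.execute("SELECT region, count(*) AS orders FROM orders GROUP BY region")
for row in cur.fetchall():
    print(row)
```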

Top Alternatives to Presto

  • Apache Spark

    Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning. ...

  • Stan

    A state-of-the-art platform for statistical modeling and high-performance statistical computation. Used for statistical modeling, data analysis, and prediction in the social, biological, and physical sciences, engineering, and business. ...

  • Apache Impala

    Impala is a modern, open source, MPP SQL query engine for Apache Hadoop. Impala is shipped by Cloudera, MapR, and Amazon. With Impala, you can query data, whether stored in HDFS or Apache HBase – including SELECT, JOIN, and aggregate functions – in real time. ...

  • Snowflake

    Snowflake eliminates the administration and management demands of traditional data warehouses and big data platforms. Snowflake is a true data warehouse as a service running on Amazon Web Services (AWS)—no infrastructure to manage and no knobs to turn. ...

  • Apache Drill

    Apache Drill is a distributed MPP query layer that supports SQL and alternative query languages against NoSQL and Hadoop data storage systems. It was inspired in part by Google's Dremel. ...

  • Druid

    Druid is a distributed, column-oriented, real-time analytics data store that is commonly used to power exploratory dashboards in multi-tenant environments. Druid excels as a data warehousing solution for fast aggregate queries on petabyte sized data sets. Druid supports a variety of flexible filters, exact calculations, approximate algorithms, and other useful calculations. ...

  • JavaScript

    JavaScript is best known as the scripting language for Web pages, but it is also used in many non-browser environments such as Node.js or Apache CouchDB. It is a prototype-based, multi-paradigm scripting language that is dynamic and supports object-oriented, imperative, and functional programming styles. ...

  • Git

    Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency. ...

Presto alternatives & related posts

Apache Spark

Fast and general engine for large-scale data processing

PROS OF APACHE SPARK
  • 61
    Open-source
  • 48
    Fast and Flexible
  • 8
    One platform for every big data problem
  • 8
    Great for distributed SQL like applications
  • 6
    Easy to install and to use
  • 3
    Works well for most data science use cases
  • 2
    Interactive Query
  • 2
    Machine learning library, streaming in real time
  • 2
    In-memory computation
CONS OF APACHE SPARK
  • 4
    Speed
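
To make Spark's "one platform for every big data problem" claim concrete, here is a minimal PySpark sketch that runs the same aggregation through both the DataFrame API and SQL; the file path and column names are illustrative:

```python
# A minimal PySpark sketch; the parquet path and column names are made up.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("example").getOrCreate()

# Batch read from any Hadoop-compatible storage (HDFS, S3, local files).
df = spark.read.parquet("hdfs:///data/events")
df.groupBy("event_type").count().show()

# The same data is also queryable interactively with SQL, Presto-style.
df.createOrReplaceTempView("events")
spark.sql("SELECT event_type, count(*) FROM events GROUP BY event_type").show()

spark.stop()
```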

related Apache Spark posts

Conor Myhrvold
Tech Brand Mgr, Office of CTO at Uber · | 44 upvotes · 10M views

How Uber developed the open source, end-to-end distributed tracing system Jaeger, now a CNCF project:

Distributed tracing is quickly becoming a must-have component in the tools that organizations use to monitor their complex, microservice-based architectures. At Uber, our open source distributed tracing system Jaeger saw large-scale internal adoption throughout 2016, integrated into hundreds of microservices and now recording thousands of traces every second.

Here is the story of how we got here, from investigating off-the-shelf solutions like Zipkin, to why we switched from pull to push architecture, and how distributed tracing will continue to evolve:

https://eng.uber.com/distributed-tracing/

(GitHub Pages : https://www.jaegertracing.io/, GitHub: https://github.com/jaegertracing/jaeger)

Bindings/Operator: Python Java Node.js Go C++ Kubernetes JavaScript OpenShift C# Apache Spark

Eric Colson
Chief Algorithms Officer at Stitch Fix · | 21 upvotes · 6.1M views

The algorithms and data infrastructure at Stitch Fix is housed in #AWS. Data acquisition is split between events flowing through Kafka, and periodic snapshots of PostgreSQL DBs. We store data in an Amazon S3 based data warehouse. Apache Spark on Yarn is our tool of choice for data movement and #ETL. Because our storage layer (S3) is decoupled from our processing layer, we are able to scale our compute environment very elastically. We have several semi-permanent, autoscaling Yarn clusters running to serve our data processing needs. While the bulk of our compute infrastructure is dedicated to algorithmic processing, we also implemented Presto for ad hoc queries and dashboards.

Beyond data movement and ETL, most #ML centric jobs (e.g. model training and execution) run in a similarly elastic environment as containers running Python and R code on Amazon EC2 Container Service clusters. The execution of batch jobs on top of ECS is managed by Flotilla, a service we built in house and open sourced (see https://github.com/stitchfix/flotilla-os).

At Stitch Fix, algorithmic integrations are pervasive across the business. We have dozens of data products actively integrated into systems. That requires a serving layer that is robust, agile, flexible, and allows for self-service. Models produced on Flotilla are packaged for deployment in production using Khan, another framework we've developed internally. Khan provides our data scientists the ability to quickly productionize those models they've developed with open source frameworks in Python 3 (e.g. PyTorch, sklearn), by automatically packaging them as Docker containers and deploying them to Amazon ECS. This provides our data scientists a one-click method of getting from their algorithms to production. We then integrate those deployments into a service mesh, which allows us to A/B test various implementations in our product.

For more info:

#DataScience #DataStack #Data

Stan

A Probabilistic Programming Language
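
Stan is the odd one out in this list: it is a probabilistic programming language rather than a SQL engine. A minimal sketch of driving it from Python via CmdStanPy, assuming cmdstanpy and the CmdStan toolchain are installed; the coin-flip model is a standard illustration, not from the original page:

```python
# A minimal sketch, assuming cmdstanpy and CmdStan are installed.
from pathlib import Path
from cmdstanpy import CmdStanModel

# Classic coin-flip model: infer the probability theta of heads.
stan_program = """
data {
  int<lower=0> N;
  array[N] int<lower=0, upper=1> y;
}
parameters {
  real<lower=0, upper=1> theta;
}
model {
  theta ~ beta(1, 1);    // uniform prior
  y ~ bernoulli(theta);  // likelihood
}
"""
Path("bernoulli.stan").write_text(stan_program)

model = CmdStanModel(stan_file="bernoulli.stan")  # compiles the model
fit = model.sample(data={"N": 10, "y": [0, 1, 0, 0, 0, 0, 0, 0, 0, 1]})
print(fit.summary())  # posterior mean and quantiles for theta
```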

      Apache Impala

      Real-time Query for Hadoop

      PROS OF APACHE IMPALA
      • 11
        Super fast
      • 1
        Massively Parallel Processing
      • 1
        Load Balancing
      • 1
        Replication
      • 1
        Scalability
      • 1
        Distributed
      • 1
        High Performance
      • 1
        Open Source
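
For a sense of the client side, here is a minimal sketch using impyla, a Python client for Impala; the host and table are placeholders, and 21050 is Impala's usual HiveServer2-compatible port:

```python
# A minimal sketch, assuming the impyla package is installed;
# host and table names are placeholders.
from impala.dbapi import connect

conn = connect(host="impalad.example.com", port=21050)
cur = conn.cursor()
# Standard SQL over data in HDFS or HBase, answered in (near) real time.
cur.execute("SELECT day, count(*) AS hits FROM web_logs GROUP BY day ORDER BY day")
for row in cur.fetchall():
    print(row)
```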

        related Apache Impala posts

        I have been working on a Java application to demonstrate the latency of select/insert/update operations on Kudu storage using the Apache Kudu Java client. I have a few questions about using the Apache Kudu API:

        1. Is there a JDBC wrapper around the Apache Kudu API for getting connections to Kudu masters, with a connection pool mechanism and all DB operations?

        2. Does the Apache Kudu API support order by, group by, and aggregate functions? If yes, how can these be implemented using the Kudu APIs?

        3. Can we add Kudu predicates to a Kudu update operation? If yes, how?

        4. Does the Apache Kudu API support batch insertion (executing a Kudu insert for multiple rows in one go instead of row by row), e.g. KuduSession.apply(List)?

        5. Does the Apache Kudu API support joins on tables?

        6. Which tool is preferred for read and update/insert DB operations: Apache Impala or the Kudu API?
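
Not an authoritative answer, but regarding question 4: the Kudu clients buffer operations in a session and flush them in batches. A hedged sketch with the kudu-python client — the master address, table, and schema are made up, and the Java client's KuduSession works analogously:

```python
# A hedged sketch with kudu-python; assumes a 'metrics' table already exists.
import kudu

client = kudu.connect(host="kudu-master.example.com", port=7051)
table = client.table("metrics")
session = client.new_session()

# Apply many inserts, then flush once, instead of a round trip per row.
for i in range(1000):
    op = table.new_insert({"id": i, "value": i * 1.5})
    session.apply(op)
session.flush()  # sends the buffered operations to the tablet servers
```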

        Snowflake

        The data warehouse built for the cloud

        PROS OF SNOWFLAKE
        • 7
          Public and Private Data Sharing
        • 4
          Multicloud
        • 4
          Good Performance
        • 4
          User Friendly
        • 3
          Great Documentation
        • 2
          Serverless
        • 1
          Economical
        • 1
          Usage based billing
        • 1
          Innovative
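
Because there is no infrastructure to manage, using Snowflake from code amounts to a connection and a query. A minimal sketch with the snowflake-connector-python package; the account, credentials, and object names are placeholders:

```python
# A minimal sketch, assuming snowflake-connector-python is installed;
# every identifier below is a placeholder.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.us-east-1",  # hypothetical account identifier
    user="ANALYST",
    password="...",
    warehouse="COMPUTE_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
cur = conn.cursor()
cur.execute("SELECT current_version()")
print(cur.fetchone())
cur.close()
conn.close()
```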

          related Snowflake posts

          I'm wondering if any Cloud Firestore users might be open to sharing some input on the challenges encountered when trying to create a low-cost, low-latency data pipeline to their analytics warehouse (e.g. Google BigQuery, Snowflake, etc.).

          I'm working with a platform by the name of Estuary.dev, an ETL/ELT tool, and we are conducting some research on the pain points here to see if there are drawbacks to the Firestore->BQ extension and/or if users are seeking easy ways of getting NoSQL data into fine-grained tabular form.

          Please feel free to drop some knowledge/wish-list stuff on me for a better pipeline here!

Shared insights on Google BigQuery and Snowflake

          I use Google BigQuery because it makes it super easy to query and store data for analytics workloads. If you're using GCP, you're likely using BigQuery. However, data viz tools connected directly to BigQuery will run pretty slow. They recently announced BI Engine, which will hopefully compete well against big players like Snowflake when it comes to concurrency.

          What's nice too is that it has SQL-based ML tools, and it has great GIS support!

          Apache Drill

          Schema-Free SQL Query Engine for Hadoop and NoSQL

          PROS OF APACHE DRILL
          • 4
            NoSQL and Hadoop
          • 3
            Free
          • 3
            Lightning speed and simplicity in face of data jungle
          • 2
            Well documented for fast install
          • 1
            SQL interface to multiple datasources
          • 1
            Nested Data support
          • 1
            Read Structured and unstructured data
          • 1
            V1.10 released - https://drill.apache.org/
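
Drill's schema-free model means you can query files directly with no table definitions. A minimal sketch against Drill's REST API, assuming a local drillbit on the default HTTP port 8047 and using the sample data bundled on Drill's classpath:

```python
# A minimal sketch, assuming a drillbit is running locally on port 8047.
import requests

resp = requests.post(
    "http://localhost:8047/query.json",
    json={
        "queryType": "SQL",
        # cp.`employee.json` ships with Drill; no schema setup required.
        "query": "SELECT employee_id, full_name FROM cp.`employee.json` LIMIT 3",
    },
)
for row in resp.json()["rows"]:
    print(row)
```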


            Druid

            Fast column-oriented distributed data store

            PROS OF DRUID
            • 15
              Real Time Aggregations
            • 6
              Batch and Real-Time Ingestion
            • 5
              OLAP
            • 3
              OLAP + OLTP
            • 2
              Combining stream and historical analytics
            • 1
              OLTP
            CONS OF DRUID
            • 3
               Limited SQL support
            • 2
              Joins are not supported well
            • 1
              Complexity
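
Druid is queried over HTTP, and its SQL layer covers common aggregations even if it is "limited" compared to a full engine (hence the con above). A minimal sketch, assuming a local router on the default port 8888 and the tutorial wikipedia datasource — both assumptions, not givens:

```python
# A minimal sketch, assuming a local Druid router on port 8888 and the
# 'wikipedia' tutorial datasource.
import requests

resp = requests.post(
    "http://localhost:8888/druid/v2/sql",
    json={"query": "SELECT channel, COUNT(*) AS edits "
                   "FROM wikipedia GROUP BY channel ORDER BY edits DESC LIMIT 5"},
)
print(resp.json())  # a JSON list of result rows
```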

            related Druid posts

Shared insights on Druid and MongoDB

            My background is in data analytics in the telecom domain. I have to build a database for analyzing large volumes of CDR data. So far the data has been maintained on a file server, and the application queries data from the files; this consumes a lot of resources and queries are taking time, so now I have been asked to come up with a new approach. I plan to rewrite the app, so which database should be used? I am torn between MongoDB and Druid.

            So please advise me on picking one of these two, and why.


            My process is like this: I get data once a month, either from Google BigQuery or as parquet files from Azure Blob Storage. I have a script that does some cleaning and then stores the result as partitioned parquet files, because the following process cannot handle loading all of the data into memory.

            The next process performs a heavy computation in a parallel fashion (per partition) and stores three intermediate versions as parquet files: two used for statistics, and a third that is filtered to create the final files.

            I make a report based on the two statistics files in a Jupyter notebook and convert it to HTML.

            • Everything is done with vanilla Python and Pandas.
            • Sometimes I may get a different format of data.
            • The cloud service is Microsoft Azure.

            What I'm considering is the following:

            Get the data with Kafka or with native Python, do the first processing, and store the data in Druid; the second processing step would be done with Apache Spark, reading the data from Druid.

            The intermediate states could be stored in Druid too, and visualization would be done with Apache Superset.
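
For reference, the partitioned-parquet step described above can be done directly in pandas; a hedged sketch with made-up paths and column names (requires the pyarrow engine):

```python
# A hedged sketch of the partitioned-parquet step; paths and column
# names are illustrative. Requires pandas and pyarrow.
import pandas as pd

df = pd.read_parquet("raw/monthly_dump.parquet")
df = df.dropna(subset=["account_id"])  # example cleaning step

# Partitioned output lets the downstream job load one partition at a
# time instead of pulling everything into memory.
df.to_parquet("clean/", partition_cols=["region"], engine="pyarrow")

# A later stage reads just one partition:
part = pd.read_parquet("clean/region=EU/")
```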

            JavaScript

            Lightweight, interpreted, object-oriented language with first-class functions

            PROS OF JAVASCRIPT
            • 1.7K
              Can be used on frontend/backend
            • 1.5K
              It's everywhere
            • 1.2K
              Lots of great frameworks
            • 896
              Fast
            • 745
              Light weight
            • 425
              Flexible
            • 392
              You can't get a device today that doesn't run js
            • 286
              Non-blocking i/o
            • 236
              Ubiquitousness
            • 191
              Expressive
            • 55
              Extended functionality to web pages
            • 49
              Relatively easy language
            • 46
              Executed on the client side
            • 30
              Relatively fast to the end user
            • 25
              Pure Javascript
            • 21
              Functional programming
            • 15
              Async
            • 13
              Full-stack
            • 12
              Setup is easy
            • 12
              Its everywhere
            • 12
              Future Language of The Web
            • 11
              JavaScript is the New PHP
            • 11
              Because I love functions
            • 10
              Like it or not, JS is part of the web standard
            • 9
              Expansive community
            • 9
              Everyone use it
            • 9
              Can be used in backend, frontend and DB
            • 9
              Easy
            • 8
              Easy to hire developers
            • 8
              No need to use PHP
            • 8
              For the good parts
            • 8
              Can be used both as frontend and backend as well
            • 8
              Powerful
            • 8
              Most Popular Language in the World
            • 7
              Popularized Class-Less Architecture & Lambdas
            • 7
              It's fun
            • 7
              Nice
            • 7
               Versatile
            • 7
              Hard not to use
            • 7
               It's fun and fast
            • 7
              Agile, packages simple to use
            • 7
              Supports lambdas and closures
            • 7
              Love-hate relationship
            • 7
              Photoshop has 3 JS runtimes built in
            • 7
              Evolution of C
            • 6
              Client side JS uses the visitors CPU to save Server Res
            • 6
               It lets me use Babel & TypeScript
            • 6
              Easy to make something
            • 6
              Can be used on frontend/backend/Mobile/create PRO Ui
            • 5
              Promise relationship
            • 5
              Stockholm Syndrome
            • 5
              Function expressions are useful for callbacks
            • 5
              Scope manipulation
            • 5
              Everywhere
            • 5
              Client processing
            • 5
              Clojurescript
            • 5
              What to add
            • 4
              Because it is so simple and lightweight
            • 4
              Only Programming language on browser
            • 1
              Easy to learn
            • 1
              Easy to understand
            • 1
              Not the best
            • 1
              Hard to learn
            CONS OF JAVASCRIPT
            • 22
              A constant moving target, too much churn
            • 20
              Horribly inconsistent
            • 15
              Javascript is the New PHP
            • 9
              No ability to monitor memory utilitization
            • 8
              Shows Zero output in case of ANY error
            • 7
              Thinks strange results are better than errors
            • 6
              Can be ugly
            • 3
              No GitHub
            • 2
              Slow

            related JavaScript posts

            Zach Holman

            Oof. I have truly hated JavaScript for a long time. Like, for over twenty years now. Like, since the Clinton administration. It's always been a nightmare to deal with all of the aspects of that silly language.

            But wowza, things have changed. Tooling is just way, way better. I'm primarily web-oriented, and using React and Apollo together the past few years really opened my eyes to building rich apps. And I deeply apologize for using the phrase rich apps; I don't think I've ever said such Enterprisey words before.

            But yeah, things are different now. I still love Rails, and still use it for a lot of apps I build. But it's that silly rich apps phrase that's the problem. Users have way more comprehensive expectations than they did even five years ago, and the JS community does a good job at building tools and tech that tackle the problems of making heavy, complicated UI and frontend work.

             Obviously there's a lot happening here, so just saying "JavaScript isn't terrible" might encompass a huge number of libraries and frameworks. But if you're like me, yeah, give things another shot: I'm somehow not hating on JavaScript anymore and... gulp... I kinda love it.

            Git

            Fast, scalable, distributed revision control system

            PROS OF GIT
            • 1.4K
              Distributed version control system
            • 1.1K
              Efficient branching and merging
            • 959
              Fast
            • 845
              Open source
            • 726
              Better than svn
            • 368
              Great command-line application
            • 306
              Simple
            • 291
              Free
            • 232
              Easy to use
            • 222
              Does not require server
            • 27
              Distributed
            • 22
              Small & Fast
            • 18
              Feature based workflow
            • 15
              Staging Area
            • 13
               Most widespread VCS
            • 11
              Role-based codelines
            • 11
              Disposable Experimentation
            • 7
              Frictionless Context Switching
            • 6
              Data Assurance
            • 5
              Efficient
            • 4
              Just awesome
            • 3
              Github integration
            • 3
              Easy branching and merging
            • 2
              Compatible
            • 2
              Flexible
            • 2
              Possible to lose history and commits
            • 1
              Rebase supported natively; reflog; access to plumbing
            • 1
              Light
            • 1
              Team Integration
            • 1
              Fast, scalable, distributed revision control system
            • 1
              Easy
            • 1
              Flexible, easy, Safe, and fast
            • 1
              CLI is great, but the GUI tools are awesome
            • 1
              It's what you do
            • 0
              Phinx
            CONS OF GIT
            • 16
              Hard to learn
            • 11
              Inconsistent command line interface
            • 9
              Easy to lose uncommitted work
            • 7
              Worst documentation ever possibly made
            • 5
              Awful merge handling
            • 3
               Nonexistent preventive security flows
            • 3
              Rebase hell
            • 2
              When --force is disabled, cannot rebase
            • 2
              Ironically even die-hard supporters screw up badly
            • 1
              Doesn't scale for big data

            related Git posts

            Simon Reymann
            Senior Fullstack Developer at QUANTUSflow Software GmbH · | 30 upvotes · 9.2M views

            Our whole DevOps stack consists of the following tools:

             • GitHub (incl. GitHub Pages/Markdown for documentation, GettingStarted and HowTo's) as collaborative review and code management tool
             • Git as the underlying revision control system
            • SourceTree as Git GUI
            • Visual Studio Code as IDE
            • CircleCI for continuous integration (automatize development process)
            • Prettier / TSLint / ESLint as code linter
            • SonarQube as quality gate
            • Docker as container management (incl. Docker Compose for multi-container application management)
            • VirtualBox for operating system simulation tests
            • Kubernetes as cluster management for docker containers
            • Heroku for deploying in test environments
            • nginx as web server (preferably used as facade server in production environment)
            • SSLMate (using OpenSSL) for certificate management
            • Amazon EC2 (incl. Amazon S3) for deploying in stage (production-like) and production environments
            • PostgreSQL as preferred database system
            • Redis as preferred in-memory database/store (great for caching)

            The main reason we have chosen Kubernetes over Docker Swarm is related to the following artifacts:

            • Key features: Easy and flexible installation, Clear dashboard, Great scaling operations, Monitoring is an integral part, Great load balancing concepts, Monitors the condition and ensures compensation in the event of failure.
            • Applications: An application can be deployed using a combination of pods, deployments, and services (or micro-services).
             • Functionality: Kubernetes has a complex installation and setup process, but it is not as limited as Docker Swarm.
            • Monitoring: It supports multiple versions of logging and monitoring when the services are deployed within the cluster (Elasticsearch/Kibana (ELK), Heapster/Grafana, Sysdig cloud integration).
            • Scalability: All-in-one framework for distributed systems.
            • Other Benefits: Kubernetes is backed by the Cloud Native Computing Foundation (CNCF), huge community among container orchestration tools, it is an open source and modular tool that works with any OS.
            Tymoteusz Paul
            Devops guy at X20X Development LTD · | 23 upvotes · 8.2M views

             Often enough I have to explain my way of going about setting up a CI/CD pipeline with multiple deployment platforms. Since I am a bit tired of yapping the same thing every single time, I've decided to write it up and share it with the world this way, and send people to read it instead ;). I will explain it on a "live example" of how Rome got built, given that the current methodology consists only of a readme.md and wishes of good luck (as it usually is ;)).

             It always starts with an app, whatever it may be, and reading the readmes available while Vagrant and VirtualBox are installing and updating. Following that is the first hurdle to go over: convert all the instructions/scripts into Ansible playbook(s), stopping only when a clean vagrant up or vagrant reload gives us a fully working environment. As our Vagrant environment is now functional, it's time to break it! This is the moment to look for how things can be done better (too rigid/too loose versioning? Sloppy environment setup?) and replace them with the right way to do stuff, one that won't bite us in the backside. This is the point, and the best opportunity, to upcycle the existing way of doing the dev environment into a proper, production-grade product.

             I should probably digress here for a moment and explain why. I firmly believe that the way you deploy production is the same way you should deploy develop, shy of a few debugging-friendly settings. This way you avoid the discrepancy between how production works vs how development works, which almost always causes major pains in the back of the neck, and with use of proper tools should mean no more work for the developers. That's why we start with Vagrant, as developer boxes should be as easy as vagrant up, but the meat of our product lies in Ansible, which will do most of the work and can be applied to almost anything: AWS, bare metal, Docker, LXC, in open net, behind VPN - you name it.

             We must also give proper consideration to monitoring and log hoovering at this point. My generic answer here is to grab Elasticsearch, Kibana, and Logstash. While for different use cases there may be better solutions, this one is well battle-tested, performs reasonably, and is very easy to scale both vertically (within some limits) and horizontally. Logstash rules are easy to write and are well supported in maintenance through Ansible, which, as I've mentioned earlier, is at the very core of things, and creating triggers/reports and alerts based on Elastic and Kibana is generally a breeze, including some quite complex aggregations.

             If we are happy with the state of Ansible, it's time to move on and put all those roles and playbooks to work. Namely, we need something to manage our CI/CD pipelines. For me, the choice is obvious: TeamCity. It's modern, robust and, unlike most of the lightweight alternatives, it's transparent. What I mean by that is that it doesn't tell you how to do things, doesn't limit your ways to deploy, or test, or package for that matter. Instead, it provides a developer-friendly and rich playground for your pipelines. You can do mostly the same with Jenkins, but it has a quite dated look and feel, while also missing some key functionality that must be brought in via plugins (like a quality REST API, which comes built-in with TeamCity). It also comes with all the common handy plugins, like Slack or Apache Maven integration.

             The exact flow between CI and CD varies too greatly from one application to another to describe, so I will outline a few rules that guide me in it:

             1. Make build steps as small as possible. This way, when something breaks, we know exactly where, without needing to dig and root around.
             2. All security credentials besides the development environment's must be sourced from individual Vault instances. Keys to those containers should exist only on the CI/CD box and be accessible by a few people (the fewer the better). This is pretty self-explanatory, as anything besides dev may contain sensitive data and, at times, be public-facing; because of that, appropriate security must be present. TeamCity shines in this department with excellent secrets management.
             3. Every part of the build chain shall consume and produce artifacts. If it creates nothing, it likely shouldn't be its own build. This way, if any issue shows up with any environment or version, all a developer has to do is grab the appropriate artifacts to reproduce the issue locally.
             4. Deployment builds should be directly tied to specific Git branches/tags. This enables much easier tracking of what caused an issue, including automated identification and tagging of the author (nothing like automated regression testing!).

             Speaking of deployments, I generally try to keep it simple but also with a close eye on the wallet. Because of that, I am more than happy with AWS or another cloud provider, but I am also constantly peeking at the loads and whether we get the value of what we are paying for. Often enough the pattern of use is not constantly erratic, but rather has a firm baseline, which could be migrated away from the cloud and onto bare metal boxes. That is another part where this approach strongly triumphs over the common Docker and CircleCI setup, where you are very much tied into using cloud providers and getting out is expensive. Here, to embrace bare-metal hosting all you need is the help of some container-based self-hosting software; my personal preference is Proxmox and LXC. Following that, all you must write are Ansible scripts to manage the hardware of Proxmox, much the same way as you do for Amazon EC2 (Ansible supports both greatly), and you are good to go. One does not exclude the other; quite the opposite, as they can live in great synergy and cut your costs dramatically (the heavier your base load, the bigger the savings) while providing production-grade resiliency.
