Apache Flink

Top Related Projects

  • Apache Spark - A unified analytics engine for large-scale data processing
  • Apache Beam - A unified programming model for batch and streaming data processing
  • Apache Hadoop
  • Apache Storm
  • Apache NiFi
  • Apache Kafka

Quick Overview

Apache Flink is an open-source, distributed stream processing and batch processing framework. It provides high-throughput, low-latency data processing capabilities for both bounded and unbounded datasets. Flink is designed to run in all common cluster environments, perform computations at in-memory speed and at any scale.

Pros

  • Unified stream and batch processing
  • Exactly-once processing semantics
  • High performance and scalability
  • Rich ecosystem of connectors and libraries
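
The exactly-once guarantee above is backed by Flink's checkpointing mechanism, which must be enabled explicitly. A minimal sketch (the 10-second interval is illustrative, not a recommendation):

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// Snapshot all operator state every 10 seconds; EXACTLY_ONCE is the default
// mode (org.apache.flink.streaming.api.CheckpointingMode).
env.enableCheckpointing(10_000, CheckpointingMode.EXACTLY_ONCE);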

Cons

  • Steep learning curve for beginners
  • Complex configuration and tuning for optimal performance
  • Limited support for certain programming languages compared to some competitors
  • Resource-intensive for small-scale applications

Code Examples

  1. Simple WordCount example:
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

DataStream<String> text = env.fromElements(
    "To be, or not to be,--that is the question:--",
    "Whether 'tis nobler in the mind to suffer"
);

DataStream<Tuple2<String, Integer>> counts = text
    .flatMap(new Tokenizer())
    .keyBy(value -> value.f0)
    .sum(1);

counts.print();

env.execute("Streaming WordCount");
  2. Windowed aggregation:
DataStream<Tuple2<String, Integer>> input = ...

input
    .keyBy(value -> value.f0)
    .window(TumblingEventTimeWindows.of(Time.minutes(5)))
    .sum(1)
    .print();
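
Event-time windows like the one above only fire as watermarks advance, so the input needs a watermark strategy. A hedged sketch, assuming five seconds of out-of-orderness is acceptable:

DataStream<Tuple2<String, Integer>> withTimestamps = input
    .assignTimestampsAndWatermarks(
        WatermarkStrategy
            // Tolerate events arriving up to 5 seconds out of order.
            .<Tuple2<String, Integer>>forBoundedOutOfOrderness(Duration.ofSeconds(5))
            // Illustrative only: a real job would extract the timestamp
            // from a field of the event instead of reusing the record's.
            .withTimestampAssigner((event, recordTimestamp) -> recordTimestamp));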
  3. Connecting two streams:
DataStream<Integer> stream1 = ...
DataStream<String> stream2 = ...

stream1.connect(stream2)
    .flatMap(new CoFlatMapFunction<Integer, String, String>() {
        @Override
        public void flatMap1(Integer value, Collector<String> out) {
            out.collect(value.toString());
        }

        @Override
        public void flatMap2(String value, Collector<String> out) {
            out.collect(value);
        }
    })
    .print();

Getting Started

  1. Add Flink dependencies to your project:
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-java</artifactId>
    <version>1.15.0</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-java</artifactId>
    <version>1.15.0</version>
</dependency>
<!-- Needed to execute jobs locally, e.g. from the IDE -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-clients</artifactId>
    <version>1.15.0</version>
</dependency>
  2. Create a StreamExecutionEnvironment:
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
  3. Define your data sources, transformations, and sinks:
DataStream<String> input = env.fromElements("Hello", "World");
DataStream<String> processed = input.map(String::toUpperCase);
processed.print();
  4. Execute the Flink job:
env.execute("My First Flink Job");

Competitor Comparisons

Apache Spark

Pros of Spark

  • Mature ecosystem with extensive libraries and integrations
  • Better suited for batch processing and machine learning tasks
  • Easier to learn and use, especially for data scientists

Cons of Spark

  • Higher latency for real-time stream processing
  • Less efficient memory management for large-scale data processing
  • More resource-intensive, requiring more hardware for similar performance

Code Comparison

Spark (Scala):

val wordCounts = lines.flatMap(_.split(" "))
                      .map((_, 1))
                      .reduceByKey(_ + _)

Flink (Java):

DataStream<Tuple2<String, Integer>> wordCounts = text
    .flatMap(new FlatMapFunction<String, Tuple2<String, Integer>>() {
        public void flatMap(String value, Collector<Tuple2<String, Integer>> out) {
            for (String word : value.split(" ")) {
                out.collect(new Tuple2<>(word, 1));
            }
        }
    })
    .keyBy(value -> value.f0)
    .sum(1);

Both examples show word count implementations, highlighting Spark's more concise syntax in Scala compared to Flink's more verbose Java code. However, Flink offers similar conciseness when using Scala or its Table API.
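
To illustrate that point, a word count in Flink SQL via the Table API is comparably compact. A sketch, using the built-in datagen connector as a stand-in source (table and column names are placeholders):

TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());

// Generate random words so the query has something to count.
tEnv.executeSql(
    "CREATE TABLE words (word STRING) WITH ('connector' = 'datagen')");

tEnv.sqlQuery("SELECT word, COUNT(*) AS cnt FROM words GROUP BY word")
    .execute()
    .print();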

Apache Beam

Pros of Beam

  • Supports multiple programming languages (Java, Python, Go)
  • Provides a unified programming model for batch and streaming
  • Offers portability across various execution engines (Flink, Spark, etc.)

Cons of Beam

  • Steeper learning curve due to its abstraction layer
  • May have performance overhead compared to native Flink applications
  • Less mature ecosystem and community support than Flink

Code Comparison

Beam (Python):

import apache_beam as beam

with beam.Pipeline() as p:
    (p | beam.Create([1, 2, 3, 4, 5])
       | beam.Map(lambda x: x * 2)
       | beam.io.WriteToText('output.txt'))

Flink (Java):

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.fromElements(1, 2, 3, 4, 5)
   .map(x -> x * 2)
   .writeAsText("output.txt");
env.execute();

Both examples demonstrate a simple data processing pipeline, but Beam's approach is more abstract and portable across different execution engines, while Flink's code is more specific to its runtime environment.
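
That portability is largely a configuration switch. A hedged sketch of the same pipeline in Beam's Java SDK targeting the Flink runner (assumes the beam-runners-flink dependency is on the classpath):

PipelineOptions options = PipelineOptionsFactory.fromArgs("--runner=FlinkRunner").create();
Pipeline p = Pipeline.create(options);

p.apply(Create.of(1, 2, 3, 4, 5))
 // The same doubling step as the Python example above.
 .apply(MapElements.into(TypeDescriptors.integers()).via(x -> x * 2));

p.run().waitUntilFinish();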

Apache Hadoop

Pros of Hadoop

  • More mature ecosystem with extensive tooling and support
  • Better suited for large-scale batch processing and data warehousing
  • Stronger support for unstructured data storage with HDFS

Cons of Hadoop

  • Slower processing speed compared to Flink's stream processing
  • More complex setup and configuration
  • Less suitable for real-time data processing and analytics

Code Comparison

Hadoop MapReduce example:

public class WordCount extends Configured implements Tool {
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();
        // ... (mapper implementation)
    }
    // ... (reducer and main method)
}

Flink DataStream API example:

public class WordCount {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<String> text = env.readTextFile("input.txt");
        DataStream<Tuple2<String, Integer>> counts = text
            .flatMap(new Tokenizer())
            .keyBy(value -> value.f0)
            .sum(1);
        counts.print();
        env.execute("Streaming WordCount");
    }
    // ... (Tokenizer implementation)
}

The code examples showcase the difference in approach between Hadoop's MapReduce paradigm and Flink's stream processing model. Hadoop requires more boilerplate code, while Flink offers a more concise and intuitive API for stream processing tasks.

Apache Storm

Pros of Storm

  • Simpler to set up and use, with a lower learning curve
  • Better suited for real-time processing of individual events
  • More mature ecosystem with a wider range of connectors and integrations

Cons of Storm

  • Less efficient for batch processing and complex stateful computations
  • Limited support for exactly-once processing semantics
  • Lower throughput compared to Flink, especially for large-scale data processing

Code Comparison

Storm topology definition:

TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("spout", new RandomSentenceSpout(), 5);
builder.setBolt("split", new SplitSentence(), 8).shuffleGrouping("spout");
builder.setBolt("count", new WordCount(), 12).fieldsGrouping("split", new Fields("word"));

Flink job definition:

DataStream<String> text = env.addSource(new FlinkKafkaConsumer<>("topic", new SimpleStringSchema(), properties));
DataStream<Tuple2<String, Integer>> wordCounts = text
    .flatMap(new Tokenizer())
    .keyBy(value -> value.f0)
    .sum(1);
wordCounts.print();

Both Storm and Flink are distributed stream processing systems, but Flink offers more advanced features for stateful processing and batch operations. Storm is simpler and better for pure real-time event processing, while Flink provides higher throughput and more flexibility for complex data processing pipelines.
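
As a concrete example of that stateful-processing gap, Flink exposes fault-tolerant per-key state directly in user functions. A minimal sketch (class and state names are illustrative):

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// Counts events per key; the count is checkpointed and survives failures.
public class PerKeyCounter extends KeyedProcessFunction<String, String, Long> {

    private transient ValueState<Long> count;

    @Override
    public void open(Configuration parameters) {
        count = getRuntimeContext().getState(
            new ValueStateDescriptor<>("count", Long.class));
    }

    @Override
    public void processElement(String value, Context ctx, Collector<Long> out)
            throws Exception {
        Long current = count.value();
        current = (current == null) ? 1L : current + 1;
        count.update(current);
        out.collect(current);
    }
}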

Apache NiFi

Pros of NiFi

  • User-friendly web-based interface for designing and managing data flows
  • Extensive support for various data formats and protocols out-of-the-box
  • Built-in data provenance and lineage tracking

Cons of NiFi

  • Less suitable for complex stream processing and analytics compared to Flink
  • Limited support for stateful processing and windowing operations
  • May have higher latency for real-time processing scenarios

Code Comparison

NiFi (using NiFi Expression Language):

${filename:substringBeforeLast('.'):trim()}

Flink (using Java API):

DataStream<String> stream = env.readTextFile("input.txt");
stream.map(value -> value.toLowerCase())
      .filter(value -> value.startsWith("a"))
      .print();

NiFi focuses on visual data flow design and management, while Flink excels in complex stream processing and analytics. NiFi's code typically involves configuring processors and using expression language, whereas Flink uses programming APIs for defining data processing pipelines.

NiFi is better suited for ETL tasks and data ingestion, offering a more accessible interface for non-developers. Flink, on the other hand, provides powerful stream processing capabilities and is more appropriate for real-time analytics and complex event processing scenarios.

Apache Kafka

Pros of Kafka

  • Highly scalable and fault-tolerant distributed streaming platform
  • Excellent for high-throughput, real-time data ingestion and processing
  • Strong ecosystem and wide industry adoption

Cons of Kafka

  • Limited built-in stream processing capabilities
  • Steeper learning curve for complex use cases
  • Requires additional tools for comprehensive data processing pipelines

Code Comparison

Kafka (Producer example):

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
Producer<String, String> producer = new KafkaProducer<>(props);

Flink (DataStream example):

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
DataStream<String> text = env.readTextFile("file:///path/to/file");
DataStream<Tuple2<String, Integer>> counts = text
    .flatMap(new Tokenizer())
    .keyBy(value -> value.f0)
    .sum(1);

While Kafka excels in distributed messaging and data ingestion, Flink offers more comprehensive stream processing capabilities. Kafka is often used as a data source or sink for Flink applications, combining their strengths in distributed systems and stream processing respectively.
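
For instance, wiring Kafka into Flink as a source takes a few lines with the flink-connector-kafka module (broker address, topic, and group id below are placeholders):

KafkaSource<String> source = KafkaSource.<String>builder()
    .setBootstrapServers("localhost:9092")
    .setTopics("input-topic")
    .setGroupId("flink-consumer")
    .setStartingOffsets(OffsetsInitializer.earliest())
    .setValueOnlyDeserializer(new SimpleStringSchema())
    .build();

// noWatermarks() is fine for jobs that only use processing time.
DataStream<String> events =
    env.fromSource(source, WatermarkStrategy.noWatermarks(), "Kafka Source");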


README

Apache Flink

Apache Flink is an open source stream processing framework with powerful stream- and batch-processing capabilities.

Learn more about Flink at https://flink.apache.org/

Features

  • A streaming-first runtime that supports both batch processing and data streaming programs

  • Elegant and fluent APIs in Java and Scala

  • A runtime that supports very high throughput and low event latency at the same time

  • Support for event time and out-of-order processing in the DataStream API, based on the Dataflow Model

  • Flexible windowing (time, count, sessions, custom triggers) across different time semantics (event time, processing time)

  • Fault-tolerance with exactly-once processing guarantees

  • Natural back-pressure in streaming programs

  • Libraries for Graph processing (batch), Machine Learning (batch), and Complex Event Processing (streaming)

  • Built-in support for iterative programs (BSP) in the DataSet (batch) API

  • Custom memory management for efficient and robust switching between in-memory and out-of-core data processing algorithms

  • Compatibility layers for Apache Hadoop MapReduce

  • Integration with YARN, HDFS, HBase, and other components of the Apache Hadoop ecosystem

Streaming Example

case class WordWithCount(word: String, count: Long)

val text = env.socketTextStream(host, port, '\n')

val windowCounts = text.flatMap { w => w.split("\\s") }
  .map { w => WordWithCount(w, 1) }
  .keyBy("word")
  .window(TumblingProcessingTimeWindows.of(Time.seconds(5)))
  .sum("count")

windowCounts.print()

Batch Example

case class WordWithCount(word: String, count: Long)

val text = env.readTextFile(path)

val counts = text.flatMap { w => w.split("\\s") }
  .map { w => WordWithCount(w, 1) }
  .groupBy("word")
  .sum("count")

counts.writeAsCsv(outputPath)

Building Apache Flink from Source

Prerequisites for building Flink:

  • Unix-like environment (we use Linux, Mac OS X, Cygwin, WSL)
  • Git
  • Maven (we require version 3.8.6)
  • Java 8 or 11 (Java 9 or 10 may work)

git clone https://github.com/apache/flink.git
cd flink
./mvnw clean package -DskipTests # this will take up to 10 minutes

Flink is now installed in build-target.

Developing Flink

The Flink committers use IntelliJ IDEA to develop the Flink codebase. We recommend IntelliJ IDEA for developing projects that involve Scala code.

Minimal requirements for an IDE are:

  • Support for Java and Scala (also mixed projects)
  • Support for Maven with Java and Scala

IntelliJ IDEA

The IntelliJ IDE supports Maven out of the box and offers a plugin for Scala development.

Check out our Setting up IntelliJ guide for details.

Eclipse Scala IDE

NOTE: From our experience, this setup does not work with Flink due to deficiencies of the old Eclipse version bundled with Scala IDE 3.0.3 or due to version incompatibilities with the bundled Scala version in Scala IDE 4.4.1.

We recommend using IntelliJ instead (see above).

Support

Don’t hesitate to ask!

Contact the developers and community on the mailing lists if you need any help.

Open an issue if you find a bug in Flink.

Documentation

The documentation of Apache Flink is located on the website: https://flink.apache.org or in the docs/ directory of the source code.

Fork and Contribute

This is an active open-source project. We are always open to people who want to use the system or contribute to it. Contact us if you are looking for implementation tasks that fit your skills. The contribution guide on the Flink website describes how to contribute to Apache Flink.

Externalized Connectors

Most Flink connectors have been externalized to individual repositories under the Apache Software Foundation.

About

Apache Flink is an open source project of The Apache Software Foundation (ASF). The Apache Flink project originated from the Stratosphere research project.