
spotify/scio

A Scala API for Apache Beam and Google Cloud Dataflow.


Top Related Projects

apache/beam
Apache Beam is a unified programming model for Batch and Streaming data processing.

tensorflow/tfx
TFX is an end-to-end platform for deploying production ML pipelines.

Quick Overview

Scio is a Scala API for Apache Beam and Google Cloud Dataflow, inspired by Apache Spark and Scalding. It provides a high-level, type-safe API for distributed data processing, making it easier to write and maintain data pipelines for batch and streaming workloads.

Pros

  • Strong type safety and compile-time checks
  • Seamless integration with Google Cloud Platform services
  • Rich set of built-in transformations and IO connectors
  • Supports both batch and streaming processing

Cons

  • Steeper learning curve for developers not familiar with Scala
  • Limited community support compared to more popular frameworks like Apache Spark
  • Dependency on Apache Beam, which may introduce complexity in some scenarios
  • Performance may not be as optimized as native Apache Beam pipelines in certain cases

Code Examples

  1. Reading from a text file and counting words:

import com.spotify.scio._

val sc = ScioContext()
sc.textFile("input.txt")
  .flatMap(_.split("\\W+"))
  .countByValue
  .map(kv => s"${kv._1}: ${kv._2}")
  .saveAsTextFile("output")
sc.run()

  2. Processing data from BigQuery:

import com.spotify.scio._
import com.spotify.scio.bigquery._

val sc = ScioContext()
sc.bigQueryTable(Table.Spec("projectId:datasetId.tableId"))
  .map(row => TableRow("column_name" -> row.getString("column_name")))
  .saveAsBigQueryTable(Table.Spec("projectId:datasetId.output_table"))
sc.run()

  3. Windowed aggregation in streaming mode:

import com.spotify.scio._
import com.spotify.scio.pubsub._
import org.joda.time.Duration

val sc = ScioContext()
// Read from an unbounded source such as Pub/Sub
// (the exact read method depends on your Scio version).
sc.pubsubSubscription[String]("projects/projectId/subscriptions/subscriptionId")
  .withFixedWindows(Duration.standardMinutes(5))
  .countByValue
  .map(kv => s"${kv._1}: ${kv._2}")
  .saveAsTextFile("windowed_output")
sc.run()

Getting Started

To start using Scio, add the following dependency to your build.sbt file:

libraryDependencies += "com.spotify" %% "scio-core" % "0.11.4"

For BigQuery support, also include:

libraryDependencies += "com.spotify" %% "scio-bigquery" % "0.11.4"

Then, import the necessary classes and create a ScioContext to start building your data pipeline:

import com.spotify.scio._

object MyPipeline {
  def main(cmdlineArgs: Array[String]): Unit = {
    val (sc, args) = ContextAndArgs(cmdlineArgs)
    // Your pipeline code here
    sc.run()
  }
}
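
Assuming the object is named MyPipeline as above, you can then run the job locally; Beam's DirectRunner is used when no other runner is specified, and the --output flag here is just a placeholder for whatever arguments your own pipeline reads from args:

sbt "runMain MyPipeline --runner=DirectRunner --output=out"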

Competitor Comparisons

apache/beam

Apache Beam is a unified programming model for Batch and Streaming data processing.

Pros of Beam

  • Supports multiple programming languages (Java, Python, Go)
  • Offers a wider range of runners for various execution engines
  • Has a larger community and more extensive documentation

Cons of Beam

  • Steeper learning curve due to more complex API
  • Can be more verbose and require more boilerplate code
  • May have slower development cycles for certain use cases

Code Comparison

Scio (Scala):

sc.textFile("input.txt")
  .flatMap(_.split("\\s+"))
  .countByValue
  .map(kv => s"${kv._1}: ${kv._2}")
  .saveAsTextFile("output")

Beam (Java):

p.apply(TextIO.read().from("input.txt"))
 .apply(FlatMapElements.into(TypeDescriptors.strings())
   .via((String word) -> Arrays.asList(word.split("\\s+"))))
 .apply(Count.perElement())
 .apply(MapElements.into(TypeDescriptors.strings())
   .via((KV<String, Long> wordCount) ->
     wordCount.getKey() + ": " + wordCount.getValue()))
 .apply(TextIO.write().to("output"));

Both Scio and Beam are powerful frameworks for data processing, with Scio offering a more concise Scala API built on top of Beam. Scio provides a simpler development experience for Scala developers, while Beam offers broader language support and more execution options. The choice between them often depends on the specific project requirements and team expertise.

tensorflow/tfx

TFX is an end-to-end platform for deploying production ML pipelines.

Pros of TFX

  • Comprehensive end-to-end ML pipeline framework
  • Tight integration with TensorFlow ecosystem
  • Robust production-ready components for data validation, model analysis, and serving

Cons of TFX

  • Steeper learning curve due to complexity
  • Less flexibility for non-TensorFlow workflows
  • Heavier resource requirements for small-scale projects

Code Comparison

TFX example:

import tfx
from tfx.components import CsvExampleGen

example_gen = CsvExampleGen(input_base='/path/to/data')

Scio example:

import com.spotify.scio._

val (sc, args) = ContextAndArgs(argv)
val data = sc.textFile("input.txt")

Key Differences

  • TFX focuses on end-to-end ML pipelines, while Scio is a general-purpose data processing framework
  • TFX is Python-based and tightly coupled with TensorFlow, whereas Scio is Scala-based and built on Apache Beam
  • TFX provides pre-built components for ML workflows, while Scio offers more flexibility for custom data processing tasks

Use Cases

  • TFX: Large-scale ML projects, especially those using TensorFlow
  • Scio: Data processing and analytics tasks, particularly in Scala-based environments

Community and Support

  • TFX: Larger community, extensive documentation, and Google backing
  • Scio: Smaller but active community, well-maintained by Spotify


README

Scio


Ecclesiastical Latin IPA: /ˈʃi.o/, [ˈʃiː.o], [ˈʃi.i̯o] Verb: I can, know, understand, have knowledge.

Scio is a Scala API for Apache Beam and Google Cloud Dataflow inspired by Apache Spark and Scalding.

Features

  • Scala API close to that of Spark and Scalding core APIs
  • Unified batch and streaming programming model
  • Fully managed service*
  • Integration with Google Cloud products: Cloud Storage, BigQuery, Pub/Sub, Datastore, Bigtable
  • Avro, Cassandra, Elasticsearch, gRPC, JDBC, neo4j, Parquet, Redis, TensorFlow IOs
  • Interactive mode with Scio REPL
  • Type safe BigQuery (see the example after this list)
  • Integration with Algebird and Breeze
  • Pipeline orchestration with Scala Futures
  • Distributed cache

* provided by Google Cloud Dataflow
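
As a quick illustration of the type safe BigQuery support, Scio can derive a case class from a table schema at compile time via the @BigQueryType annotation. The sketch below reads the public Shakespeare sample table; it assumes the scio-google-cloud-platform (or scio-bigquery) dependency and macro annotations enabled in your build:

import com.spotify.scio._
import com.spotify.scio.bigquery._

object TypedBigQueryExample {
  // Generates a case class `Row` whose fields mirror the table schema
  @BigQueryType.fromTable("bigquery-public-data:samples.shakespeare")
  class Row

  def main(cmdlineArgs: Array[String]): Unit = {
    val (sc, args) = ContextAndArgs(cmdlineArgs)
    sc.typedBigQuery[Row]() // strongly typed rows, checked at compile time
      .map(r => s"${r.word}: ${r.word_count}")
      .saveAsTextFile("shakespeare-word-counts")
    sc.run()
  }
}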

Quick Start

Download and install the Java Development Kit (JDK) version 8.

Install sbt.

Use our giter8 template to quickly create a new Scio job repository:

sbt new spotify/scio.g8

Switch to the new repo (default scio-job) and build it:

cd scio-job
sbt stage

Run the included word count example:

target/universal/stage/bin/scio-job --output=wc

List result files and inspect content:

ls -l wc
cat wc/part-00000-of-00004.txt
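
To run the same example on Google Cloud Dataflow instead of locally, pass the standard Dataflow pipeline options; a sketch, assuming a GCP project, region, and a GCS bucket you can write to:

target/universal/stage/bin/scio-job \
  --project=[PROJECT] \
  --region=[REGION] \
  --runner=DataflowRunner \
  --output=gs://[BUCKET]/wordcount/wc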

Documentation

Getting Started is the best place to start with Scio. If you are new to Apache Beam and distributed data processing, check out the Beam Programming Guide first for a detailed explanation of the Beam programming model and concepts. If you have experience with other Scala data processing libraries, check out this comparison between Scio, Scalding and Spark.

Example Scio pipelines and tests can be found under scio-examples. A lot of them are direct ports from Beam's Java examples. See this page for some of them with side-by-side explanation. Also see Big Data Rosetta Code for common data processing code snippets in Scio, Scalding and Spark.

Artifacts

Scio includes the following artifacts; a sample build.sbt combining a few of them follows the list:

  • scio-avro: add-on for Avro, can also be used standalone
  • scio-cassandra*: add-ons for Cassandra
  • scio-core: core library
  • scio-elasticsearch*: add-ons for Elasticsearch
  • scio-extra: extra utilities for working with collections, Breeze, etc., best effort support
  • scio-google-cloud-platform: add-on for Google Cloud IOs: BigQuery, Bigtable, Pub/Sub, Datastore, Spanner
  • scio-grpc: add-on for gRPC service calls
  • scio-jdbc: add-on for JDBC IO
  • scio-neo4j: add-on for Neo4J IO
  • scio-parquet: add-on for Parquet
  • scio-redis: add-on for Redis
  • scio-repl: extension of the Scala REPL with Scio specific operations
  • scio-smb: add-on for Sort Merge Bucket operations
  • scio-tensorflow: add-on for TensorFlow TFRecords IO and prediction
  • scio-test: umbrella module containing the test utilities below; add to your project as a "test" dependency
    • scio-test-core: test core utilities
    • scio-test-google-cloud-platform: test utilities for Google Cloud IOs
    • scio-test-parquet: test utilities for Parquet
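
For example, a build.sbt for a pipeline that reads Parquet, uses Google Cloud IOs, and has unit tests might combine a few of these artifacts (a sketch; 0.11.4 is the version used elsewhere on this page, substitute the latest release):

val scioVersion = "0.11.4"

libraryDependencies ++= Seq(
  "com.spotify" %% "scio-core" % scioVersion,
  "com.spotify" %% "scio-parquet" % scioVersion,
  "com.spotify" %% "scio-google-cloud-platform" % scioVersion,
  "com.spotify" %% "scio-test" % scioVersion % Test
)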

License

Copyright 2024 Spotify AB.

Licensed under the Apache License, Version 2.0: http://www.apache.org/licenses/LICENSE-2.0