An open API service providing repository metadata for many open source software ecosystems.

GitHub topics: amazon-msk

averemee-si/aws-secrets-manager-kafka

AWS Secrets Manager Config Provider for Apache Kafka

Language: Java - Size: 84 KB - Last synced at: 22 days ago - Pushed at: 22 days ago - Stars: 0 - Forks: 0
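Kafka loads external secrets through its pluggable `ConfigProvider` mechanism, which this repository implements for AWS Secrets Manager. A minimal sketch of how such a provider is typically wired into a worker or client configuration follows; the provider class name and secret path below are assumptions for illustration — check the repository's README for the actual class and reference syntax.

```properties
# Register a config provider under the alias "secretsmanager".
# The class name here is an assumed placeholder, not necessarily
# the one shipped by this repository.
config.providers=secretsmanager
config.providers.secretsmanager.class=example.aws.SecretsManagerConfigProvider

# Reference a secret value instead of embedding it in plain text.
# Syntax is ${<alias>:<path>:<key>}; "my-kafka-secret" and "password"
# are illustrative names.
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="kafka-user" \
  password="${secretsmanager:my-kafka-secret:password}";
```

At startup, Kafka resolves each `${…}` placeholder by calling the registered provider, so credentials never appear verbatim in the properties file.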

build-on-aws/prioritizing-event-processing-with-apache-kafka

Technical solution to implement event processing prioritization with Apache Kafka using the concept of buckets.

Language: Java - Size: 585 KB - Last synced at: 29 days ago - Pushed at: 5 months ago - Stars: 28 - Forks: 8

build-on-aws/building-apache-kafka-connectors

Sample code that shows the important aspects of developing custom connectors for Kafka Connect. It provides the resources for building, deploying, and running the code on-premises using Docker, as well as running the code in the cloud.

Language: Java - Size: 57.6 KB - Last synced at: 18 days ago - Pushed at: 11 months ago - Stars: 54 - Forks: 14

aws-samples/aws-msk-cdc-data-pipeline-with-debezium

Data Pipeline for CDC data from MySQL DB to Amazon S3 through Amazon MSK using Amazon MSK Connect (Debezium).

Language: Python - Size: 442 KB - Last synced at: 2 months ago - Pushed at: 2 months ago - Stars: 2 - Forks: 2

aws-samples/aws-cdk-managed-elkk

Managed ELKK stack implemented with the AWS CDK

Language: Python - Size: 25.4 MB - Last synced at: 23 days ago - Pushed at: 10 months ago - Stars: 42 - Forks: 19

sve2-2021ss/mom-ammer

Demo event analytics platform based on Apache Kafka (Confluent).

Language: Python - Size: 2.7 MB - Last synced at: almost 2 years ago - Pushed at: almost 4 years ago - Stars: 0 - Forks: 0

amazon-archives/cdc-neo4j-msk-neptune 📦

After migrating from an existing graph database to Amazon Neptune, you may want to capture and process changed data in real time. Continuous replication using the change data capture (CDC) technique unlocks your data for other systems, supporting use cases such as distributed data processing, building an enterprise data lake, and modernizing an existing database. A previous post in this series demonstrated an automated migration from Neo4j to Amazon Neptune; this solution goes beyond one-time migration, keeping both databases in sync through ongoing CDC-based replication.

Language: JavaScript - Size: 755 KB - Last synced at: about 2 years ago - Pushed at: almost 5 years ago - Stars: 4 - Forks: 2

create-speech-to-text-pipeline/pipeline

A deployable tool that posts and receives text and audio files to and from a data lake, applies transformations in a distributed manner, and loads the results into a warehouse in a format suitable for training a speech-to-text model.

Language: Jupyter Notebook - Size: 5.01 MB - Last synced at: about 2 years ago - Pushed at: over 2 years ago - Stars: 0 - Forks: 8

garystafford/kstreams-on-msk

Kafka KStreams example using Amazon MSK with IAM Auth

Language: Java - Size: 141 KB - Last synced at: over 1 year ago - Pushed at: almost 3 years ago - Stars: 1 - Forks: 0
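Connecting a Kafka Streams (or any Kafka client) application to Amazon MSK with IAM authentication relies on the `aws-msk-iam-auth` library. A minimal sketch of the client properties involved, assuming that library is on the classpath:

```properties
# MSK IAM auth uses SASL over TLS with the AWS_MSK_IAM mechanism.
security.protocol=SASL_SSL
sasl.mechanism=AWS_MSK_IAM

# Login module and callback handler provided by the aws-msk-iam-auth library;
# credentials are resolved from the default AWS credential chain.
sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
```

The broker endpoints and any Streams-specific settings (`application.id`, serdes, and so on) are supplied alongside these properties as usual.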

garystafford/msk-perf-test

Amazon MSK Performance Testing

Size: 43.9 KB - Last synced at: over 1 year ago - Pushed at: over 2 years ago - Stars: 1 - Forks: 0