diff --git a/.gitignore b/.gitignore index 0a017d0b..bbd7aecb 100644 --- a/.gitignore +++ b/.gitignore @@ -1,6 +1,3 @@ -# Ignore bundler config. -/.bundle - Gemfile.lock ext/ports ext/tmp @@ -9,6 +6,3 @@ ext/librdkafka.* .yardoc doc coverage -vendor -.idea/ -out/ diff --git a/CHANGELOG.md b/CHANGELOG.md index 92d087fd..f29765e2 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,198 +1,99 @@ -# Rdkafka Changelog - -## 0.17.0 (Unreleased) -- [Enhancement] Update `librdkafka` to `2.4.0` -- [Feature] Add `#seek_by` to be able to seek for a message by topic, partition and offset (zinahia) -- [Fix] Switch to local release of librdkafka to mitigate its unavailability. - -## 0.16.1 (2024-07-10) -- [Fix] Switch to local release of librdkafka to mitigate its unavailability. - -## 0.16.0 (2024-06-13) -- **[Breaking]** Retire support for Ruby 2.7. -- **[Breaking]** Messages without headers returned by `#poll` contain frozen empty hash. -- **[Breaking]** `HashWithSymbolKeysTreatedLikeStrings` has been removed so headers are regular hashes with string keys. -- **[Feature]** Support incremental config describe + alter API. -- **[Feature]** Oauthbearer token refresh callback (bruce-szalwinski-he) -- **[Feature]** Provide ability to use topic config on a producer for custom behaviors per dispatch. -- [Enhancement] Use topic config reference cache for messages production to prevent topic objects allocation with each message. -- [Enhancement] Provide `Rrdkafka::Admin#describe_errors` to get errors descriptions (mensfeld) -- [Enhancement] Replace time poll based wait engine with an event based to improve response times on blocking operations and wait (nijikon + mensfeld) -- [Enhancement] Allow for usage of the second regex engine of librdkafka by setting `RDKAFKA_DISABLE_REGEX_EXT` during build (mensfeld) -- [Enhancement] name polling Thread as `rdkafka.native_kafka#` (nijikon) -- [Enhancement] Save two objects on message produced and lower CPU usage on message produced with small improvements. -- [Change] Allow for native kafka thread operations deferring and manual start for consumer, producer and admin. -- [Change] The `wait_timeout` argument in `AbstractHandle.wait` method is deprecated and will be removed in future versions without replacement. We don't rely on it's value anymore (nijikon) -- [Fix] Background logger stops working after forking causing memory leaks (mensfeld) -- [Fix] Fix bogus case/when syntax. Levels 1, 2, and 6 previously defaulted to UNKNOWN (jjowdy) - -## 0.15.2 (2024-07-10) -- [Fix] Switch to local release of librdkafka to mitigate its unavailability. - -## 0.15.1 (2024-01-30) -- [Enhancement] Provide support for Nix OS (alexandriainfantino) -- [Enhancement] Replace `rd_kafka_offset_store` with `rd_kafka_offsets_store` (mensfeld) -- [Enhancement] Alias `topic_name` as `topic` in the delivery report (mensfeld) -- [Enhancement] Provide `label` producer handler and report reference for improved traceability (mensfeld) -- [Enhancement] Include the error when invoking `create_result` on producer handle (mensfeld) -- [Enhancement] Skip intermediate array creation on delivery report callback execution (one per message) (mensfeld). -- [Enhancement] Report `-1` instead of `nil` in case `partition_count` failure (mensfeld). 
-- [Fix] Fix return type on `#rd_kafka_poll` (mensfeld) -- [Fix] `uint8_t` does not exist on Apple Silicon (mensfeld) -- [Fix] Missing ACL `RD_KAFKA_RESOURCE_BROKER` constant reference (mensfeld) -- [Fix] Partition cache caches invalid nil result for `PARTITIONS_COUNT_TTL` (mensfeld) -- [Change] Rename `matching_acl_pattern_type` to `matching_acl_resource_pattern_type` to align the whole API (mensfeld) - -## 0.15.0 (2023-12-03) -- **[Feature]** Add `Admin#metadata` (mensfeld) -- **[Feature]** Add `Admin#create_partitions` (mensfeld) -- **[Feature]** Add `Admin#delete_group` utility (piotaixr) -- **[Feature]** Add Create and Delete ACL Feature To Admin Functions (vgnanasekaran) -- **[Feature]** Support `#assignment_lost?` on a consumer to check for involuntary assignment revocation (mensfeld) -- [Enhancement] Expose alternative way of managing consumer events via a separate queue (mensfeld) -- [Enhancement] **Bump** librdkafka to 2.3.0 (mensfeld) -- [Enhancement] Increase the `#lag` and `#query_watermark_offsets` default timeouts from 100ms to 1000ms. This will compensate for network glitches and remote clusters operations (mensfeld) -- [Change] Use `SecureRandom.uuid` instead of `random` for test consumer groups (mensfeld) - -## 0.14.1 (2024-07-10) +# 0.10.1 - [Fix] Switch to local release of librdkafka to mitigate its unavailability. -## 0.14.0 (2023-11-21) -- [Enhancement] Add `raise_response_error` flag to the `Rdkafka::AbstractHandle`. -- [Enhancement] Allow for setting `statistics_callback` as nil to reset predefined settings configured by a different gem (mensfeld) -- [Enhancement] Get consumer position (thijsc & mensfeld) -- [Enhancement] Provide `#purge` to remove any outstanding requests from the producer (mensfeld) -- [Enhancement] Update `librdkafka` to `2.2.0` (mensfeld) -- [Enhancement] Introduce producer partitions count metadata cache (mensfeld) -- [Enhancement] Increase metadata timeout request from `250 ms` to `2000 ms` default to allow for remote cluster operations via `rdkafka-ruby` (mensfeld) -- [Enhancement] Introduce `#name` for producers and consumers (mensfeld) -- [Enhancement] Include backtrace in non-raised binded errors (mensfeld) -- [Fix] Reference to Opaque is not released when Admin, Consumer or Producer is closed (mensfeld) -- [Fix] Trigger `#poll` on native kafka creation to handle oauthbearer cb (mensfeld) -- [Fix] `#flush` does not handle the timeouts errors by making it return `true` if all flushed or `false` if failed. 
We do **not** raise an exception here to keep it backwards compatible (mensfeld) -- [Change] Remove support for Ruby 2.6 due to it being EOL and WeakMap incompatibilities (mensfeld) -- [Change] Update Kafka Docker with Confluent KRaft (mensfeld) -- [Change] Update librdkafka repo reference from edenhill to confluentinc (mensfeld) - -## 0.13.0 (2023-07-24) -- Support cooperative sticky partition assignment in the rebalance callback (methodmissing) -- Support both string and symbol header keys (ColinDKelley) -- Handle tombstone messages properly (kgalieva) -- Add topic name to delivery report (maeve) -- Allow string partitioner config (mollyegibson) -- Fix documented type for DeliveryReport#error (jimmydo) -- Bump librdkafka to 2.0.2 (lmaia) -- Use finalizers to cleanly exit producer and admin (thijsc) -- Lock access to the native kafka client (thijsc) -- Fix potential race condition in multi-threaded producer (mensfeld) -- Fix leaking FFI resources in specs (mensfeld) -- Improve specs stability (mensfeld) -- Make metadata request timeout configurable (mensfeld) -- call_on_partitions_assigned and call_on_partitions_revoked only get a tpl passed in (thijsc) - -## 0.12.0 (2022-06-17) -- Bumps librdkafka to 1.9.0 -- Fix crash on empty partition key (mensfeld) -- Pass the delivery handle to the callback (gvisokinskas) - -## 0.11.0 (2021-11-17) -- Upgrade librdkafka to 1.8.2 -- Bump supported minimum Ruby version to 2.6 -- Better homebrew path detection - -## 0.10.0 (2021-09-07) -- Upgrade librdkafka to 1.5.0 -- Add error callback config - -## 0.9.0 (2021-06-23) -- Fixes for Ruby 3.0 -- Allow any callable object for callbacks (gremerritt) -- Reduce memory allocations in Rdkafka::Producer#produce (jturkel) -- Use queue as log callback to avoid unsafe calls from trap context (breunigs) -- Allow passing in topic configuration on create_topic (dezka) -- Add each_batch method to consumer (mgrosso) - -## 0.8.1 (2020-12-07) -- Fix topic_flag behaviour and add tests for Metadata (geoff2k) -- Add topic admin interface (geoff2k) -- Raise an exception if @native_kafka is nil (geoff2k) -- Option to use zstd compression (jasonmartens) - -## 0.8.0 (2020-06-02) -- Upgrade librdkafka to 1.4.0 -- Integrate librdkafka metadata API and add partition_key (by Adithya-copart) -- Ruby 2.7 compatibility fix (by Geoff Thé)A -- Add error to delivery report (by Alex Stanovsky) -- Don't override CPPFLAGS and LDFLAGS if already set on Mac (by Hiroshi Hatake) -- Allow use of Rake 13.x and up (by Tomasz Pajor) - -## 0.7.0 (2019-09-21) -- Bump librdkafka to 1.2.0 (by rob-as) -- Allow customizing the wait time for delivery report availability (by mensfeld) - -## 0.6.0 (2019-07-23) -- Bump librdkafka to 1.1.0 (by Chris Gaffney) -- Implement seek (by breunigs) - -## 0.5.0 (2019-04-11) -- Bump librdkafka to 1.0.0 (by breunigs) -- Add cluster and member information (by dmexe) -- Support message headers for consumer & producer (by dmexe) -- Add consumer rebalance listener (by dmexe) -- Implement pause/resume partitions (by dmexe) - -## 0.4.2 (2019-01-12) -- Delivery callback for producer -- Document list param of commit method -- Use default Homebrew openssl location if present -- Consumer lag handles empty topics -- End iteration in consumer when it is closed -- Add support for storing message offsets -- Add missing runtime dependency to rake - -## 0.4.1 (2018-10-19) -- Bump librdkafka to 0.11.6 - -## 0.4.0 (2018-09-24) -- Improvements in librdkafka archive download -- Add global statistics callback -- Use Time for timestamps, 
potentially breaking change if you +# 0.10.0 +* Upgrade librdkafka to 1.5.0 +* Add error callback config + +# 0.9.0 +* Fixes for Ruby 3.0 +* Allow any callable object for callbacks (gremerritt) +* Reduce memory allocations in Rdkafka::Producer#produce (jturkel) +* Use queue as log callback to avoid unsafe calls from trap context (breunigs) +* Allow passing in topic configuration on create_topic (dezka) +* Add each_batch method to consumer (mgrosso) + +# 0.8.1 +* Fix topic_flag behaviour and add tests for Metadata (geoff2k) +* Add topic admin interface (geoff2k) +* Raise an exception if @native_kafka is nil (geoff2k) +* Option to use zstd compression (jasonmartens) + +# 0.8.0 +* Upgrade librdkafka to 1.4.0 +* Integrate librdkafka metadata API and add partition_key (by Adithya-copart) +* Ruby 2.7 compatibility fix (by Geoff Thé)A +* Add error to delivery report (by Alex Stanovsky) +* Don't override CPPFLAGS and LDFLAGS if already set on Mac (by Hiroshi Hatake) +* Allow use of Rake 13.x and up (by Tomasz Pajor) + +# 0.7.0 +* Bump librdkafka to 1.2.0 (by rob-as) +* Allow customizing the wait time for delivery report availability (by mensfeld) + +# 0.6.0 +* Bump librdkafka to 1.1.0 (by Chris Gaffney) +* Implement seek (by breunigs) + +# 0.5.0 +* Bump librdkafka to 1.0.0 (by breunigs) +* Add cluster and member information (by dmexe) +* Support message headers for consumer & producer (by dmexe) +* Add consumer rebalance listener (by dmexe) +* Implement pause/resume partitions (by dmexe) + +# 0.4.2 +* Delivery callback for producer +* Document list param of commit method +* Use default Homebrew openssl location if present +* Consumer lag handles empty topics +* End iteration in consumer when it is closed +* Add suport for storing message offsets +* Add missing runtime dependency to rake + +# 0.4.1 +* Bump librdkafka to 0.11.6 + +# 0.4.0 +* Improvements in librdkafka archive download +* Add global statistics callback +* Use Time for timestamps, potentially breaking change if you rely on the previous behavior where it returns an integer with the number of milliseconds. -- Bump librdkafka to 0.11.5 -- Implement TopicPartitionList in Ruby so we don't have to keep +* Bump librdkafka to 0.11.5 +* Implement TopicPartitionList in Ruby so we don't have to keep track of native objects. -- Support committing a topic partition list -- Add consumer assignment method +* Support committing a topic partition list +* Add consumer assignment method -## 0.3.5 (2018-01-17) -- Fix crash when not waiting for delivery handles -- Run specs on Ruby 2.5 +# 0.3.5 +* Fix crash when not waiting for delivery handles +* Run specs on Ruby 2.5 -## 0.3.4 (2017-12-05) -- Bump librdkafka to 0.11.3 +# 0.3.4 +* Bump librdkafka to 0.11.3 -## 0.3.3 (2017-10-27) -- Fix bug that prevent display of `RdkafkaError` message +# 0.3.3 +* Fix bug that prevent display of `RdkafkaError` message -## 0.3.2 (2017-10-25) -- `add_topic` now supports using a partition count -- Add way to make errors clearer with an extra message -- Show topics in subscribe error message -- Show partition and topic in query watermark offsets error message +# 0.3.2 +* `add_topic` now supports using a partition count +* Add way to make errors clearer with an extra message +* Show topics in subscribe error message +* Show partition and topic in query watermark offsets error message -## 0.3.1 (2017-10-23) -- Bump librdkafka to 0.11.1 -- Officially support ranges in `add_topic` for topic partition list. 
-- Add consumer lag calculator +# 0.3.1 +* Bump librdkafka to 0.11.1 +* Officially support ranges in `add_topic` for topic partition list. +* Add consumer lag calculator -## 0.3.0 (2017-10-17) -- Move both add topic methods to one `add_topic` in `TopicPartitionList` -- Add committed offsets to consumer -- Add query watermark offset to consumer +# 0.3.0 +* Move both add topic methods to one `add_topic` in `TopicPartitionList` +* Add committed offsets to consumer +* Add query watermark offset to consumer -## 0.2.0 (2017-10-13) -- Some refactoring and add inline documentation +# 0.2.0 +* Some refactoring and add inline documentation -## 0.1.x (2017-09-10) -- Initial working version including producing and consuming +# 0.1.x +* Initial working version including producing and consuming diff --git a/Gemfile b/Gemfile index be173b20..b4e2a20b 100644 --- a/Gemfile +++ b/Gemfile @@ -1,5 +1,3 @@ -# frozen_string_literal: true - source "https://rubygems.org" gemspec diff --git a/Guardfile b/Guardfile deleted file mode 100644 index dba4f3e9..00000000 --- a/Guardfile +++ /dev/null @@ -1,19 +0,0 @@ -# frozen_string_literal: true - -logger level: :error - -guard :rspec, cmd: "bundle exec rspec --format #{ENV.fetch("FORMAT", "documentation")}" do - require "guard/rspec/dsl" - dsl = Guard::RSpec::Dsl.new(self) - - # Ruby files - ruby = dsl.ruby - dsl.watch_spec_files_for(ruby.lib_files) - watch(%r{^lib/(.+)\.rb}) { |m| "spec/#{m[1]}_spec.rb" } - - # RSpec files - rspec = dsl.rspec - watch(rspec.spec_helper) { rspec.spec_dir } - watch(rspec.spec_support) { rspec.spec_dir } - watch(rspec.spec_files) -end diff --git a/MIT-LICENSE b/LICENSE similarity index 93% rename from MIT-LICENSE rename to LICENSE index 1bf24eac..ac544a55 100644 --- a/MIT-LICENSE +++ b/LICENSE @@ -1,7 +1,6 @@ The MIT License (MIT) -Copyright (c) 2017-2023 Thijs Cadier - 2023, Maciej Mensfeld +Copyright (c) 2017 Thijs Cadier Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal diff --git a/README.md b/README.md index 24ae2c61..97dff7fa 100644 --- a/README.md +++ b/README.md @@ -1,62 +1,36 @@ # Rdkafka -[![Build Status](https://github.com/karafka/rdkafka-ruby/actions/workflows/ci.yml/badge.svg)](https://github.com/karafka/rdkafka-ruby/actions/workflows/ci.yml) +[![Build Status](https://appsignal.semaphoreci.com/badges/rdkafka-ruby/branches/master.svg?style=shields)](https://appsignal.semaphoreci.com/projects/rdkafka-ruby) [![Gem Version](https://badge.fury.io/rb/rdkafka.svg)](https://badge.fury.io/rb/rdkafka) -[![Join the chat at https://slack.karafka.io](https://raw.githubusercontent.com/karafka/misc/master/slack.svg)](https://slack.karafka.io) - -> [!NOTE] -> The `rdkafka-ruby` gem was created and developed by [AppSignal](https://www.appsignal.com/). Their impactful contributions have significantly shaped the Ruby Kafka and Karafka ecosystems. For robust monitoring, we highly recommend AppSignal. - ---- +[![Maintainability](https://api.codeclimate.com/v1/badges/ecb1765f81571cccdb0e/maintainability)](https://codeclimate.com/github/appsignal/rdkafka-ruby/maintainability) The `rdkafka` gem is a modern Kafka client library for Ruby based on -[librdkafka](https://github.com/confluentinc/librdkafka/). +[librdkafka](https://github.com/edenhill/librdkafka/). It wraps the production-ready C client using the [ffi](https://github.com/ffi/ffi) -gem and targets Kafka 1.0+ and Ruby versions under security or -active maintenance. 
We remove a Ruby version from our CI builds when they -become EOL. - -`rdkafka` was written because of the need for a reliable Ruby client for Kafka that supports modern Kafka at [AppSignal](https://appsignal.com). AppSignal runs it in production on very high-traffic systems. - -The most essential pieces of a Kafka client are implemented, and we aim to provide all relevant consumer, producer, and admin APIs. - -## Table of content - -- [Project Scope](#project-scope) -- [Installation](#installation) -- [Usage](#usage) - * [Consuming Messages](#consuming-messages) - * [Producing Messages](#producing-messages) -- [Higher Level Libraries](#higher-level-libraries) - * [Message Processing Frameworks](#message-processing-frameworks) - * [Message Publishing Libraries](#message-publishing-libraries) -- [Forking](#forking) -- [Development](#development) -- [Example](#example) -- [Versions](#versions) +gem and targets Kafka 1.0+ and Ruby 2.4+. -## Project Scope +`rdkafka` was written because we needed a reliable Ruby client for +Kafka that supports modern Kafka at [AppSignal](https://appsignal.com). +We run it in production on very high traffic systems. -While rdkafka-ruby aims to simplify the use of librdkafka in Ruby applications, it's important to understand the limitations of this library: - -- **No Complex Producers/Consumers**: This library does not intend to offer complex producers or consumers. The aim is to stick closely to the functionalities provided by librdkafka itself. - -- **Focus on librdkafka Capabilities**: Features that can be achieved directly in Ruby, without specific needs from librdkafka, are outside the scope of this library. - -- **Existing High-Level Functionalities**: Certain high-level functionalities like producer metadata cache and simple consumer are already part of the library. Although they fall slightly outside the primary goal, they will remain part of the contract, given their existing usage. +This gem only provides a high-level Kafka consumer. If you are running +an older version of Kafka and/or need the legacy simple consumer we +suggest using the [Hermann](https://github.com/reiseburo/hermann) gem. +The most important pieces of a Kafka client are implemented. We're +working towards feature completeness, you can track that here: +https://github.com/appsignal/rdkafka-ruby/milestone/1 ## Installation -When installed, this gem downloads and compiles librdkafka. If you have any problems installing the gem, please open an issue. +This gem downloads and compiles librdkafka when it is installed. If you +have any problems installing the gem please open an issue. ## Usage -Please see the [documentation](https://karafka.io/docs/code/rdkafka-ruby/) for full details on how to use this gem. Below are two quick examples. +See the [documentation](https://www.rubydoc.info/github/appsignal/rdkafka-ruby) for full details on how to use this gem. Two quick examples: -Unless you are seeking specific low-level capabilities, we **strongly** recommend using [Karafka](https://github.com/karafka/karafka) and [WaterDrop](https://github.com/karafka/waterdrop) when working with Kafka. These are higher-level libraries also maintained by us based on rdkafka-ruby. - -### Consuming Messages +### Consuming messages Subscribe to a topic and get messages. Kafka will automatically spread the available partitions over consumers with the same group id. 
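The hunk below only keeps the tail of the consumer example (`consumer.each ... end`). For context, a minimal sketch of the full consuming flow; the broker address, group id, and topic name are placeholder values, not taken from this diff:

```ruby
require "rdkafka"

# Placeholder connection settings for a local broker.
config = {
  :"bootstrap.servers" => "localhost:9092",
  :"group.id" => "ruby-test"
}

consumer = Rdkafka::Config.new(config).consumer
consumer.subscribe("ruby-test-topic")

# Blocks and yields each message as it arrives.
consumer.each do |message|
  puts "Message received: #{message}"
end
```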
@@ -74,11 +48,11 @@ consumer.each do |message| end ``` -### Producing Messages +### Producing messages -Produce several messages, put the delivery handles in an array, and +Produce a number of messages, put the delivery handles in an array and wait for them before exiting. This way the messages will be batched and -efficiently sent to Kafka. +sent to Kafka in an efficient way. ```ruby config = {:"bootstrap.servers" => "localhost:9092"} @@ -97,54 +71,32 @@ end delivery_handles.each(&:wait) ``` -Note that creating a producer consumes some resources that will not be released until it `#close` is explicitly called, so be sure to call `Config#producer` only as necessary. - -## Higher Level Libraries - -Currently, there are two actively developed frameworks based on `rdkafka-ruby`, that provide higher-level API that can be used to work with Kafka messages and one library for publishing messages. - -### Message Processing Frameworks - -* [Karafka](https://github.com/karafka/karafka) - Ruby and Rails efficient Kafka processing framework. -* [Racecar](https://github.com/zendesk/racecar) - A simple framework for Kafka consumers in Ruby - -### Message Publishing Libraries - -* [WaterDrop](https://github.com/karafka/waterdrop) – Standalone Karafka library for producing Kafka messages. - -## Forking - -When working with `rdkafka-ruby`, it's essential to know that the underlying `librdkafka` library does not support fork-safe operations, even though it is thread-safe. Forking a process after initializing librdkafka clients can lead to unpredictable behavior due to inherited file descriptors and memory states. This limitation requires careful handling, especially in Ruby applications that rely on forking. - -To address this, it's highly recommended to: - -- Never initialize any `rdkafka-ruby` producers or consumers before forking to avoid state corruption. -- Before forking, always close any open producers or consumers if you've opened any. -- Use high-level libraries like [WaterDrop](https://github.com/karafka/waterdrop) and [Karafka](https://github.com/karafka/karafka/), which provide abstractions for handling librdkafka's intricacies. +Note that creating a producer consumes some resources that will not be +released until it `#close` is explicitly called, so be sure to call +`Config#producer` only as necessary. ## Development -Contributors are encouraged to focus on enhancements that align with the core goal of the library. We appreciate contributions but will likely not accept pull requests for features that: - -- Implement functionalities that can achieved using standard Ruby capabilities without changes to the underlying rdkafka-ruby bindings. -- Deviate significantly from the primary aim of providing librdkafka bindings with Ruby-friendly interfaces. - -A Docker Compose file is included to run Kafka. To run that: +A Docker Compose file is included to run Kafka and Zookeeper. To run +that: ``` docker-compose up ``` -Run `bundle` and `cd ext && bundle exec rake && cd ..` to download and compile `librdkafka`. +Run `bundle` and `cd ext && bundle exec rake && cd ..` to download and +compile `librdkafka`. -You can then run `bundle exec rspec` to run the tests. To see rdkafka debug output: +You can then run `bundle exec rspec` to run the tests. 
To see rdkafka +debug output: ``` DEBUG_PRODUCER=true bundle exec rspec DEBUG_CONSUMER=true bundle exec rspec ``` -After running the tests, you can bring the cluster down to start with a clean slate: +After running the tests you can bring the cluster down to start with a +clean slate: ``` docker-compose down @@ -152,22 +104,9 @@ docker-compose down ## Example -To see everything working, run these in separate tabs: +To see everything working run these in separate tabs: ``` bundle exec rake consume_messages bundle exec rake produce_messages ``` - -## Versions - -| rdkafka-ruby | librdkafka | -|-|-| -| 0.17.0 (Unreleased) | 2.4.0 (2024-05-07) | -| 0.16.0 (2024-06-13) | 2.3.0 (2023-10-25) | -| 0.15.0 (2023-12-03) | 2.3.0 (2023-10-25) | -| 0.14.0 (2023-11-21) | 2.2.0 (2023-07-12) | -| 0.13.0 (2023-07-24) | 2.0.2 (2023-01-20) | -| 0.12.0 (2022-06-17) | 1.9.0 (2022-06-16) | -| 0.11.0 (2021-11-17) | 1.8.2 (2021-10-18) | -| 0.10.0 (2021-09-07) | 1.5.0 (2020-07-20) | diff --git a/Rakefile b/Rakefile index 3093b587..bb12c01f 100644 --- a/Rakefile +++ b/Rakefile @@ -1,5 +1,3 @@ -# frozen_string_literal: true - # Rakefile require 'bundler/gem_tasks' diff --git a/certs/cert_chain.pem b/certs/cert_chain.pem deleted file mode 100644 index 566c617f..00000000 --- a/certs/cert_chain.pem +++ /dev/null @@ -1,26 +0,0 @@ ------BEGIN CERTIFICATE----- -MIIEcDCCAtigAwIBAgIBATANBgkqhkiG9w0BAQsFADA/MRAwDgYDVQQDDAdjb250 -YWN0MRcwFQYKCZImiZPyLGQBGRYHa2FyYWZrYTESMBAGCgmSJomT8ixkARkWAmlv -MB4XDTIzMDgyMTA3MjU1NFoXDTI0MDgyMDA3MjU1NFowPzEQMA4GA1UEAwwHY29u -dGFjdDEXMBUGCgmSJomT8ixkARkWB2thcmFma2ExEjAQBgoJkiaJk/IsZAEZFgJp -bzCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAOuZpyQKEwsTG9plLat7 -8bUaNuNBEnouTsNMr6X+XTgvyrAxTuocdsyP1sNCjdS1B8RiiDH1/Nt9qpvlBWon -sdJ1SYhaWNVfqiYStTDnCx3PRMmHRdD4KqUWKpN6VpZ1O/Zu+9Mw0COmvXgZuuO9 -wMSJkXRo6dTCfMedLAIxjMeBIxtoLR2e6Jm6MR8+8WYYVWrO9kSOOt5eKQLBY7aK -b/Dc40EcJKPg3Z30Pia1M9ZyRlb6SOj6SKpHRqc7vbVQxjEw6Jjal1lZ49m3YZMd -ArMAs9lQZNdSw5/UX6HWWURLowg6k10RnhTUtYyzO9BFev0JFJftHnmuk8vtb+SD -5VPmjFXg2VOcw0B7FtG75Vackk8QKfgVe3nSPhVpew2CSPlbJzH80wChbr19+e3+ -YGr1tOiaJrL6c+PNmb0F31NXMKpj/r+n15HwlTMRxQrzFcgjBlxf2XFGnPQXHhBm -kp1OFnEq4GG9sON4glRldkwzi/f/fGcZmo5fm3d+0ZdNgwIDAQABo3cwdTAJBgNV -HRMEAjAAMAsGA1UdDwQEAwIEsDAdBgNVHQ4EFgQUPVH5+dLA80A1kJ2Uz5iGwfOa -1+swHQYDVR0RBBYwFIESY29udGFjdEBrYXJhZmthLmlvMB0GA1UdEgQWMBSBEmNv -bnRhY3RAa2FyYWZrYS5pbzANBgkqhkiG9w0BAQsFAAOCAYEAnpa0jcN7JzREHMTQ -bfZ+xcvlrzuROMY6A3zIZmQgbnoZZNuX4cMRrT1p1HuwXpxdpHPw7dDjYqWw3+1h -3mXLeMuk7amjQpYoSWU/OIZMhIsARra22UN8qkkUlUj3AwTaChVKN/bPJOM2DzfU -kz9vUgLeYYFfQbZqeI6SsM7ltilRV4W8D9yNUQQvOxCFxtLOetJ00fC/E7zMUzbK -IBwYFQYsbI6XQzgAIPW6nGSYKgRhkfpmquXSNKZRIQ4V6bFrufa+DzD0bt2ZA3ah -fMmJguyb5L2Gf1zpDXzFSPMG7YQFLzwYz1zZZvOU7/UCpQsHpID/YxqDp4+Dgb+Y -qma0whX8UG/gXFV2pYWpYOfpatvahwi+A1TwPQsuZwkkhi1OyF1At3RY+hjSXyav -AnG1dJU+yL2BK7vaVytLTstJME5mepSZ46qqIJXMuWob/YPDmVaBF39TDSG9e34s -msG3BiCqgOgHAnL23+CN3Rt8MsuRfEtoTKpJVcCfoEoNHOkc ------END CERTIFICATE----- diff --git a/dist/librdkafka-1.5.0.tar.gz b/dist/librdkafka-1.5.0.tar.gz new file mode 100644 index 00000000..603cfb9f Binary files /dev/null and b/dist/librdkafka-1.5.0.tar.gz differ diff --git a/dist/librdkafka_2.4.0.tar.gz b/dist/librdkafka_2.4.0.tar.gz deleted file mode 100644 index ed41a983..00000000 Binary files a/dist/librdkafka_2.4.0.tar.gz and /dev/null differ diff --git a/docker-compose.yml b/docker-compose.yml index da8306e0..35d1f28d 100644 --- a/docker-compose.yml +++ b/docker-compose.yml @@ -1,27 +1,24 @@ +--- + version: '2' services: - kafka: - container_name: kafka - image: 
confluentinc/cp-kafka:7.6.1 + zookeeper: + image: confluentinc/cp-zookeeper:latest + environment: + ZOOKEEPER_CLIENT_PORT: 2181 + ZOOKEEPER_TICK_TIME: 2000 + kafka: + image: confluentinc/cp-kafka:latest + depends_on: + - zookeeper ports: - 9092:9092 - environment: - CLUSTER_ID: kafka-docker-cluster-1 + KAFKA_BROKER_ID: 1 + KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181 + KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092 + KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1 - KAFKA_PROCESS_ROLES: broker,controller - KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER - KAFKA_LISTENERS: PLAINTEXT://:9092,CONTROLLER://:9093 - KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT - KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://127.0.0.1:9092 - KAFKA_BROKER_ID: 1 - KAFKA_CONTROLLER_QUORUM_VOTERS: 1@127.0.0.1:9093 - ALLOW_PLAINTEXT_LISTENER: 'yes' - KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true' - KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1 - KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1 - KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND: "true" - KAFKA_AUTHORIZER_CLASS_NAME: org.apache.kafka.metadata.authorizer.StandardAuthorizer diff --git a/ext/README.md b/ext/README.md index 7efed6fd..7c7397ba 100644 --- a/ext/README.md +++ b/ext/README.md @@ -1,11 +1,11 @@ # Ext -This gem depends on the `librdkafka` C library. It is downloaded when +This gem dependes on the `librdkafka` C library. It is downloaded when this gem is installed. To update the `librdkafka` version follow the following steps: -* Go to https://github.com/confluentinc/librdkafka/releases to get the new +* Go to https://github.com/edenhill/librdkafka/releases to get the new version number and asset checksum for `tar.gz`. * Change the version in `lib/rdkafka/version.rb` * Change the `sha256` in `lib/rdkafka/version.rb` diff --git a/ext/Rakefile b/ext/Rakefile index 58c1392b..dec0f164 100644 --- a/ext/Rakefile +++ b/ext/Rakefile @@ -1,67 +1,54 @@ -# frozen_string_literal: true - require File.expand_path('../../lib/rdkafka/version', __FILE__) +require "mini_portile2" require "fileutils" require "open-uri" task :default => :clean do - # For nix users, nix can't locate the file paths because the packages it's requiring aren't managed by the system but are - # managed by nix itself, so using the normal file paths doesn't work for nix users. - # - # Mini_portile causes an issue because it's dependencies are downloaded on the fly and therefore don't exist/aren't - # accessible in the nix environment - if ENV.fetch('RDKAFKA_EXT_PATH', '').empty? - # Download and compile librdkafka if RDKAFKA_EXT_PATH is not set - require "mini_portile2" - recipe = MiniPortile.new("librdkafka", Rdkafka::LIBRDKAFKA_VERSION) - - # Use default homebrew openssl if we're on mac and the directory exists - # and each of flags is not empty - if recipe.host&.include?("darwin") && system("which brew &> /dev/null") && Dir.exist?("#{homebrew_prefix = %x(brew --prefix openssl).strip}") - ENV["CPPFLAGS"] = "-I#{homebrew_prefix}/include" unless ENV["CPPFLAGS"] - ENV["LDFLAGS"] = "-L#{homebrew_prefix}/lib" unless ENV["LDFLAGS"] + # MiniPortile#download_file_http is a monkey patch that removes the download + # progress indicator. 
This indicator relies on the 'Content Length' response + # headers, which is not set by GitHub + class MiniPortile + def download_file_http(url, full_path, _count) + filename = File.basename(full_path) + with_tempfile(filename, full_path) do |temp_file| + params = { 'Accept-Encoding' => 'identity' } + OpenURI.open_uri(url, 'rb', params) do |io| + temp_file.write(io.read) + end + output + end end + end - releases = File.expand_path(File.join(File.dirname(__FILE__), '../dist')) - - recipe.files << { - :url => "file://#{releases}/librdkafka_#{Rdkafka::LIBRDKAFKA_VERSION}.tar.gz", - :sha256 => Rdkafka::LIBRDKAFKA_SOURCE_SHA256 - } - recipe.configure_options = ["--host=#{recipe.host}"] + # Download and compile librdkafka + recipe = MiniPortile.new("librdkafka", Rdkafka::LIBRDKAFKA_VERSION) - # Disable using libc regex engine in favor of the embedded one - # The default regex engine of librdkafka does not always work exactly as most of the users - # would expect, hence this flag allows for changing it to the other one - if ENV.key?('RDKAFKA_DISABLE_REGEX_EXT') - recipe.configure_options << '--disable-regex-ext' - end + # Use default homebrew openssl if we're on mac and the directory exists + # and each of flags is not empty + if recipe.host&.include?("darwin") && Dir.exist?("/usr/local/opt/openssl") + ENV["CPPFLAGS"] = "-I/usr/local/opt/openssl/include" unless ENV["CPPFLAGS"] + ENV["LDFLAGS"] = "-L/usr/local/opt/openssl/lib" unless ENV["LDFLAGS"] + end - recipe.cook - # Move dynamic library we're interested in - if recipe.host.include?('darwin') - from_extension = '1.dylib' - to_extension = 'dylib' - else - from_extension = 'so.1' - to_extension = 'so' - end - lib_path = File.join(File.dirname(__FILE__), "ports/#{recipe.host}/librdkafka/#{Rdkafka::LIBRDKAFKA_VERSION}/lib/librdkafka.#{from_extension}") - FileUtils.mv(lib_path, File.join(File.dirname(__FILE__), "librdkafka.#{to_extension}")) - # Cleanup files created by miniportile we don't need in the gem - FileUtils.rm_rf File.join(File.dirname(__FILE__), "tmp") - FileUtils.rm_rf File.join(File.dirname(__FILE__), "ports") + recipe.files << { + :url => "https://codeload.github.com/edenhill/librdkafka/tar.gz/v#{Rdkafka::LIBRDKAFKA_VERSION}", + :sha256 => Rdkafka::LIBRDKAFKA_SOURCE_SHA256 + } + recipe.configure_options = ["--host=#{recipe.host}"] + recipe.cook + # Move dynamic library we're interested in + if recipe.host.include?('darwin') + from_extension = '1.dylib' + to_extension = 'dylib' else - # Otherwise, copy existing libraries to ./ext - if ENV['RDKAFKA_EXT_PATH'].nil? || ENV['RDKAFKA_EXT_PATH'].empty? 
- raise "RDKAFKA_EXT_PATH must be set in your nix config when running under nix" - end - files = [ - File.join(ENV['RDKAFKA_EXT_PATH'], 'lib', 'librdkafka.dylib'), - File.join(ENV['RDKAFKA_EXT_PATH'], 'lib', 'librdkafka.so') - ] - files.each { |ext| FileUtils.cp(ext, File.dirname(__FILE__)) if File.exist?(ext) } + from_extension = 'so.1' + to_extension = 'so' end + lib_path = File.join(File.dirname(__FILE__), "ports/#{recipe.host}/librdkafka/#{Rdkafka::LIBRDKAFKA_VERSION}/lib/librdkafka.#{from_extension}") + FileUtils.mv(lib_path, File.join(File.dirname(__FILE__), "librdkafka.#{to_extension}")) + # Cleanup files created by miniportile we don't need in the gem + FileUtils.rm_rf File.join(File.dirname(__FILE__), "tmp") + FileUtils.rm_rf File.join(File.dirname(__FILE__), "ports") end task :clean do diff --git a/lib/rdkafka.rb b/lib/rdkafka.rb index aff015df..155d45d9 100644 --- a/lib/rdkafka.rb +++ b/lib/rdkafka.rb @@ -1,36 +1,11 @@ -# frozen_string_literal: true - -require "logger" -require "objspace" -require "ffi" -require "json" - require "rdkafka/version" -require "rdkafka/helpers/time" -require "rdkafka/helpers/oauth" + require "rdkafka/abstract_handle" require "rdkafka/admin" require "rdkafka/admin/create_topic_handle" require "rdkafka/admin/create_topic_report" -require "rdkafka/admin/delete_groups_handle" -require "rdkafka/admin/delete_groups_report" require "rdkafka/admin/delete_topic_handle" require "rdkafka/admin/delete_topic_report" -require "rdkafka/admin/create_partitions_handle" -require "rdkafka/admin/create_partitions_report" -require "rdkafka/admin/create_acl_handle" -require "rdkafka/admin/create_acl_report" -require "rdkafka/admin/delete_acl_handle" -require "rdkafka/admin/delete_acl_report" -require "rdkafka/admin/describe_acl_handle" -require "rdkafka/admin/describe_acl_report" -require "rdkafka/admin/describe_configs_handle" -require "rdkafka/admin/describe_configs_report" -require "rdkafka/admin/incremental_alter_configs_handle" -require "rdkafka/admin/incremental_alter_configs_report" -require "rdkafka/admin/acl_binding_result" -require "rdkafka/admin/config_binding_result" -require "rdkafka/admin/config_resource_binding_result" require "rdkafka/bindings" require "rdkafka/callbacks" require "rdkafka/config" @@ -41,11 +16,6 @@ require "rdkafka/consumer/topic_partition_list" require "rdkafka/error" require "rdkafka/metadata" -require "rdkafka/native_kafka" require "rdkafka/producer" require "rdkafka/producer/delivery_handle" require "rdkafka/producer/delivery_report" - -# Main Rdkafka namespace of this gem -module Rdkafka -end diff --git a/lib/rdkafka/abstract_handle.rb b/lib/rdkafka/abstract_handle.rb index c1db090f..7407af05 100644 --- a/lib/rdkafka/abstract_handle.rb +++ b/lib/rdkafka/abstract_handle.rb @@ -1,49 +1,25 @@ -# frozen_string_literal: true +require "ffi" module Rdkafka - # This class serves as an abstract base class to represent handles within the Rdkafka module. - # As a subclass of `FFI::Struct`, this class provides a blueprint for other specific handle - # classes to inherit from, ensuring they adhere to a particular structure and behavior. - # - # Subclasses must define their own layout, and the layout must start with: - # - # layout :pending, :bool, - # :response, :int class AbstractHandle < FFI::Struct - include Helpers::Time + # Subclasses must define their own layout, and the layout must start with: + # + # layout :pending, :bool, + # :response, :int - # Registry for registering all the handles. 
REGISTRY = {} - # Default wait timeout is 31 years - MAX_WAIT_TIMEOUT_FOREVER = 10_000_000_000 - # Deprecation message for wait_timeout argument in wait method - WAIT_TIMEOUT_DEPRECATION_MESSAGE = "The 'wait_timeout' argument is deprecated and will be removed in future versions without replacement. " \ - "We don't rely on it's value anymore. Please refactor your code to remove references to it." - private_constant :MAX_WAIT_TIMEOUT_FOREVER + CURRENT_TIME = -> { Process.clock_gettime(Process::CLOCK_MONOTONIC) }.freeze - class << self - # Adds handle to the register - # - # @param handle [AbstractHandle] any handle we want to register - def register(handle) - address = handle.to_ptr.address - REGISTRY[address] = handle - end + private_constant :CURRENT_TIME - # Removes handle from the register based on the handle address - # - # @param address [Integer] address of the registered handle we want to remove - def remove(address) - REGISTRY.delete(address) - end + def self.register(handle) + address = handle.to_ptr.address + REGISTRY[address] = handle end - def initialize - @mutex = Thread::Mutex.new - @resource = Thread::ConditionVariable.new - - super + def self.remove(address) + REGISTRY.delete(address) end # Whether the handle is still pending. @@ -54,52 +30,36 @@ def pending? end # Wait for the operation to complete or raise an error if this takes longer than the timeout. - # If there is a timeout this does not mean the operation failed, rdkafka might still be working - # on the operation. In this case it is possible to call wait again. + # If there is a timeout this does not mean the operation failed, rdkafka might still be working on the operation. + # In this case it is possible to call wait again. # - # @param max_wait_timeout [Numeric, nil] Amount of time to wait before timing out. - # If this is nil we will wait forever - # @param wait_timeout [nil] deprecated - # @param raise_response_error [Boolean] should we raise error when waiting finishes - # - # @return [Object] Operation-specific result + # @param max_wait_timeout [Numeric, nil] Amount of time to wait before timing out. If this is nil it does not time out. + # @param wait_timeout [Numeric] Amount of time we should wait before we recheck if the operation has completed # # @raise [RdkafkaError] When the operation failed # @raise [WaitTimeoutError] When the timeout has been reached and the handle is still pending - def wait(max_wait_timeout: 60, wait_timeout: nil, raise_response_error: true) - Kernel.warn(WAIT_TIMEOUT_DEPRECATION_MESSAGE) unless wait_timeout.nil? - - timeout = max_wait_timeout ? monotonic_now + max_wait_timeout : MAX_WAIT_TIMEOUT_FOREVER - - @mutex.synchronize do - loop do - if pending? - to_wait = (timeout - monotonic_now) - - if to_wait.positive? - @resource.wait(@mutex, to_wait) - else - raise WaitTimeoutError.new( - "Waiting for #{operation_name} timed out after #{max_wait_timeout} seconds" - ) - end - elsif self[:response] != 0 && raise_response_error - raise_error - else - return create_result + # + # @return [Object] Operation-specific result + def wait(max_wait_timeout: 60, wait_timeout: 0.1) + timeout = if max_wait_timeout + CURRENT_TIME.call + max_wait_timeout + else + nil + end + loop do + if pending? 
+ if timeout && timeout <= CURRENT_TIME.call + raise WaitTimeoutError.new("Waiting for #{operation_name} timed out after #{max_wait_timeout} seconds") end + sleep wait_timeout + elsif self[:response] != 0 + raise_error + else + return create_result end end end - # Unlock the resources - def unlock - @mutex.synchronize do - self[:pending] = false - @resource.broadcast - end - end - # @return [String] the name of the operation (e.g. "delivery") def operation_name raise "Must be implemented by subclass!" diff --git a/lib/rdkafka/admin.rb b/lib/rdkafka/admin.rb index bd18a4ae..f25184a8 100644 --- a/lib/rdkafka/admin.rb +++ b/lib/rdkafka/admin.rb @@ -1,113 +1,43 @@ -# frozen_string_literal: true - module Rdkafka class Admin - include Helpers::OAuth - - class << self - # Allows us to retrieve librdkafka errors with descriptions - # Useful for debugging and building UIs, etc. - # - # @return [Hash] hash with errors mapped by code - def describe_errors - # Memory pointers for the array of structures and count - p_error_descs = FFI::MemoryPointer.new(:pointer) - p_count = FFI::MemoryPointer.new(:size_t) - - # Call the attached function - Bindings.rd_kafka_get_err_descs(p_error_descs, p_count) - - # Retrieve the number of items in the array - count = p_count.read_uint - - # Get the pointer to the array of error descriptions - array_of_errors = FFI::Pointer.new(Bindings::NativeErrorDesc, p_error_descs.read_pointer) - - errors = {} - - count.times do |i| - # Get the pointer to each struct - error_ptr = array_of_errors[i] - - # Create a new instance of NativeErrorDesc for each item - error_desc = Bindings::NativeErrorDesc.new(error_ptr) - - # Read values from the struct - code = error_desc[:code] - - name = '' - desc = '' - - name = error_desc[:name].read_string unless error_desc[:name].null? - desc = error_desc[:desc].read_string unless error_desc[:desc].null? - - errors[code] = { code: code, name: name, description: desc } - end - - errors - end - end - # @private def initialize(native_kafka) @native_kafka = native_kafka - - # Makes sure, that native kafka gets closed before it gets GCed by Ruby - ObjectSpace.define_finalizer(self, native_kafka.finalizer) - end - - # Starts the native Kafka polling thread and kicks off the init polling - # @note Not needed to run unless explicit start was disabled - def start - @native_kafka.start - end - - # @return [String] admin name - def name - @name ||= @native_kafka.with_inner do |inner| - ::Rdkafka::Bindings.rd_kafka_name(inner) - end - end - - def finalizer - ->(_) { close } - end - - # Performs the metadata request using admin - # - # @param topic_name [String, nil] metadat about particular topic or all if nil - # @param timeout_ms [Integer] metadata request timeout - # @return [Metadata] requested metadata - def metadata(topic_name = nil, timeout_ms = 2_000) - closed_admin_check(__method__) - - @native_kafka.with_inner do |inner| - Metadata.new(inner, topic_name, timeout_ms) + @closing = false + + # Start thread to poll client for callbacks + @polling_thread = Thread.new do + loop do + Rdkafka::Bindings.rd_kafka_poll(@native_kafka, 250) + # Exit thread if closing and the poll queue is empty + if @closing && Rdkafka::Bindings.rd_kafka_outq_len(@native_kafka) == 0 + break + end + end end + @polling_thread.abort_on_exception = true end # Close this admin instance def close - return if closed? - ObjectSpace.undefine_finalizer(self) - @native_kafka.close - end + return unless @native_kafka - # Whether this admin has closed - def closed? - @native_kafka.closed? 
+ # Indicate to polling thread that we're closing + @closing = true + # Wait for the polling thread to finish up + @polling_thread.join + Rdkafka::Bindings.rd_kafka_destroy(@native_kafka) + @native_kafka = nil end # Create a topic with the given partition count and replication factor # - # @return [CreateTopicHandle] Create topic handle that can be used to wait for the result of - # creating the topic - # # @raise [ConfigError] When the partition count or replication factor are out of valid range # @raise [RdkafkaError] When the topic name is invalid or the topic already exists # @raise [RdkafkaError] When the topic configuration is invalid + # + # @return [CreateTopicHandle] Create topic handle that can be used to wait for the result of creating the topic def create_topic(topic_name, partition_count, replication_factor, topic_config={}) - closed_admin_check(__method__) # Create a rd_kafka_NewTopic_t representing the new topic error_buffer = FFI::MemoryPointer.from_string(" " * 256) @@ -138,9 +68,7 @@ def create_topic(topic_name, partition_count, replication_factor, topic_config={ topics_array_ptr.write_array_of_pointer(pointer_array) # Get a pointer to the queue that our request will be enqueued on - queue_ptr = @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_queue_get_background(inner) - end + queue_ptr = Rdkafka::Bindings.rd_kafka_queue_get_background(@native_kafka) if queue_ptr.null? Rdkafka::Bindings.rd_kafka_NewTopic_destroy(new_topic_ptr) raise Rdkafka::Config::ConfigError.new("rd_kafka_queue_get_background was NULL") @@ -151,22 +79,18 @@ def create_topic(topic_name, partition_count, replication_factor, topic_config={ create_topic_handle[:pending] = true create_topic_handle[:response] = -1 CreateTopicHandle.register(create_topic_handle) - admin_options_ptr = @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_AdminOptions_new(inner, Rdkafka::Bindings::RD_KAFKA_ADMIN_OP_CREATETOPICS) - end + admin_options_ptr = Rdkafka::Bindings.rd_kafka_AdminOptions_new(@native_kafka, Rdkafka::Bindings::RD_KAFKA_ADMIN_OP_CREATETOPICS) Rdkafka::Bindings.rd_kafka_AdminOptions_set_opaque(admin_options_ptr, create_topic_handle.to_ptr) begin - @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_CreateTopics( - inner, + Rdkafka::Bindings.rd_kafka_CreateTopics( + @native_kafka, topics_array_ptr, 1, admin_options_ptr, queue_ptr - ) - end - rescue Exception + ) + rescue Exception => err CreateTopicHandle.remove(create_topic_handle.to_ptr.address) raise ensure @@ -178,66 +102,12 @@ def create_topic(topic_name, partition_count, replication_factor, topic_config={ create_topic_handle end - def delete_group(group_id) - closed_admin_check(__method__) - - # Create a rd_kafka_DeleteGroup_t representing the new topic - delete_groups_ptr = Rdkafka::Bindings.rd_kafka_DeleteGroup_new( - FFI::MemoryPointer.from_string(group_id) - ) - - pointer_array = [delete_groups_ptr] - groups_array_ptr = FFI::MemoryPointer.new(:pointer) - groups_array_ptr.write_array_of_pointer(pointer_array) - - # Get a pointer to the queue that our request will be enqueued on - queue_ptr = @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_queue_get_background(inner) - end - if queue_ptr.null? 
- Rdkafka::Bindings.rd_kafka_DeleteTopic_destroy(delete_topic_ptr) - raise Rdkafka::Config::ConfigError.new("rd_kafka_queue_get_background was NULL") - end - - # Create and register the handle we will return to the caller - delete_groups_handle = DeleteGroupsHandle.new - delete_groups_handle[:pending] = true - delete_groups_handle[:response] = -1 - DeleteGroupsHandle.register(delete_groups_handle) - admin_options_ptr = @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_AdminOptions_new(inner, Rdkafka::Bindings::RD_KAFKA_ADMIN_OP_DELETETOPICS) - end - Rdkafka::Bindings.rd_kafka_AdminOptions_set_opaque(admin_options_ptr, delete_groups_handle.to_ptr) - - begin - @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_DeleteGroups( - inner, - groups_array_ptr, - 1, - admin_options_ptr, - queue_ptr - ) - end - rescue Exception - DeleteGroupsHandle.remove(delete_groups_handle.to_ptr.address) - raise - ensure - Rdkafka::Bindings.rd_kafka_AdminOptions_destroy(admin_options_ptr) - Rdkafka::Bindings.rd_kafka_queue_destroy(queue_ptr) - Rdkafka::Bindings.rd_kafka_DeleteGroup_destroy(delete_groups_ptr) - end - - delete_groups_handle - end - - # Deletes the named topic + # Delete the named topic # - # @return [DeleteTopicHandle] Delete topic handle that can be used to wait for the result of - # deleting the topic # @raise [RdkafkaError] When the topic name is invalid or the topic does not exist + # + # @return [DeleteTopicHandle] Delete topic handle that can be used to wait for the result of deleting the topic def delete_topic(topic_name) - closed_admin_check(__method__) # Create a rd_kafka_DeleteTopic_t representing the topic to be deleted delete_topic_ptr = Rdkafka::Bindings.rd_kafka_DeleteTopic_new(FFI::MemoryPointer.from_string(topic_name)) @@ -248,9 +118,7 @@ def delete_topic(topic_name) topics_array_ptr.write_array_of_pointer(pointer_array) # Get a pointer to the queue that our request will be enqueued on - queue_ptr = @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_queue_get_background(inner) - end + queue_ptr = Rdkafka::Bindings.rd_kafka_queue_get_background(@native_kafka) if queue_ptr.null? 
Rdkafka::Bindings.rd_kafka_DeleteTopic_destroy(delete_topic_ptr) raise Rdkafka::Config::ConfigError.new("rd_kafka_queue_get_background was NULL") @@ -261,22 +129,18 @@ def delete_topic(topic_name) delete_topic_handle[:pending] = true delete_topic_handle[:response] = -1 DeleteTopicHandle.register(delete_topic_handle) - admin_options_ptr = @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_AdminOptions_new(inner, Rdkafka::Bindings::RD_KAFKA_ADMIN_OP_DELETETOPICS) - end + admin_options_ptr = Rdkafka::Bindings.rd_kafka_AdminOptions_new(@native_kafka, Rdkafka::Bindings::RD_KAFKA_ADMIN_OP_DELETETOPICS) Rdkafka::Bindings.rd_kafka_AdminOptions_set_opaque(admin_options_ptr, delete_topic_handle.to_ptr) begin - @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_DeleteTopics( - inner, + Rdkafka::Bindings.rd_kafka_DeleteTopics( + @native_kafka, topics_array_ptr, 1, admin_options_ptr, queue_ptr - ) - end - rescue Exception + ) + rescue Exception => err DeleteTopicHandle.remove(delete_topic_handle.to_ptr.address) raise ensure @@ -287,547 +151,5 @@ def delete_topic(topic_name) delete_topic_handle end - - # Creates extra partitions for a given topic - # - # @param topic_name [String] - # @param partition_count [Integer] how many partitions we want to end up with for given topic - # - # @raise [ConfigError] When the partition count or replication factor are out of valid range - # @raise [RdkafkaError] When the topic name is invalid or the topic already exists - # @raise [RdkafkaError] When the topic configuration is invalid - # - # @return [CreateTopicHandle] Create topic handle that can be used to wait for the result of creating the topic - def create_partitions(topic_name, partition_count) - closed_admin_check(__method__) - - @native_kafka.with_inner do |inner| - error_buffer = FFI::MemoryPointer.from_string(" " * 256) - new_partitions_ptr = Rdkafka::Bindings.rd_kafka_NewPartitions_new( - FFI::MemoryPointer.from_string(topic_name), - partition_count, - error_buffer, - 256 - ) - if new_partitions_ptr.null? - raise Rdkafka::Config::ConfigError.new(error_buffer.read_string) - end - - pointer_array = [new_partitions_ptr] - topics_array_ptr = FFI::MemoryPointer.new(:pointer) - topics_array_ptr.write_array_of_pointer(pointer_array) - - # Get a pointer to the queue that our request will be enqueued on - queue_ptr = Rdkafka::Bindings.rd_kafka_queue_get_background(inner) - if queue_ptr.null? 
- Rdkafka::Bindings.rd_kafka_NewPartitions_destroy(new_partitions_ptr) - raise Rdkafka::Config::ConfigError.new("rd_kafka_queue_get_background was NULL") - end - - # Create and register the handle we will return to the caller - create_partitions_handle = CreatePartitionsHandle.new - create_partitions_handle[:pending] = true - create_partitions_handle[:response] = -1 - CreatePartitionsHandle.register(create_partitions_handle) - admin_options_ptr = Rdkafka::Bindings.rd_kafka_AdminOptions_new(inner, Rdkafka::Bindings::RD_KAFKA_ADMIN_OP_CREATEPARTITIONS) - Rdkafka::Bindings.rd_kafka_AdminOptions_set_opaque(admin_options_ptr, create_partitions_handle.to_ptr) - - begin - Rdkafka::Bindings.rd_kafka_CreatePartitions( - inner, - topics_array_ptr, - 1, - admin_options_ptr, - queue_ptr - ) - rescue Exception - CreatePartitionsHandle.remove(create_partitions_handle.to_ptr.address) - raise - ensure - Rdkafka::Bindings.rd_kafka_AdminOptions_destroy(admin_options_ptr) - Rdkafka::Bindings.rd_kafka_queue_destroy(queue_ptr) - Rdkafka::Bindings.rd_kafka_NewPartitions_destroy(new_partitions_ptr) - end - - create_partitions_handle - end - end - - # Create acl - # @param resource_type - values of type rd_kafka_ResourceType_t - # https://github.com/confluentinc/librdkafka/blob/292d2a66b9921b783f08147807992e603c7af059/src/rdkafka.h#L7307 - # valid values are: - # RD_KAFKA_RESOURCE_TOPIC = 2 - # RD_KAFKA_RESOURCE_GROUP = 3 - # RD_KAFKA_RESOURCE_BROKER = 4 - # @param resource_pattern_type - values of type rd_kafka_ResourcePatternType_t - # https://github.com/confluentinc/librdkafka/blob/292d2a66b9921b783f08147807992e603c7af059/src/rdkafka.h#L7320 - # valid values are: - # RD_KAFKA_RESOURCE_PATTERN_MATCH = 2 - # RD_KAFKA_RESOURCE_PATTERN_LITERAL = 3 - # RD_KAFKA_RESOURCE_PATTERN_PREFIXED = 4 - # @param operation - values of type rd_kafka_AclOperation_t - # https://github.com/confluentinc/librdkafka/blob/292d2a66b9921b783f08147807992e603c7af059/src/rdkafka.h#L8403 - # valid values are: - # RD_KAFKA_ACL_OPERATION_ALL = 2 - # RD_KAFKA_ACL_OPERATION_READ = 3 - # RD_KAFKA_ACL_OPERATION_WRITE = 4 - # RD_KAFKA_ACL_OPERATION_CREATE = 5 - # RD_KAFKA_ACL_OPERATION_DELETE = 6 - # RD_KAFKA_ACL_OPERATION_ALTER = 7 - # RD_KAFKA_ACL_OPERATION_DESCRIBE = 8 - # RD_KAFKA_ACL_OPERATION_CLUSTER_ACTION = 9 - # RD_KAFKA_ACL_OPERATION_DESCRIBE_CONFIGS = 10 - # RD_KAFKA_ACL_OPERATION_ALTER_CONFIGS = 11 - # RD_KAFKA_ACL_OPERATION_IDEMPOTENT_WRITE = 12 - # @param permission_type - values of type rd_kafka_AclPermissionType_t - # https://github.com/confluentinc/librdkafka/blob/292d2a66b9921b783f08147807992e603c7af059/src/rdkafka.h#L8435 - # valid values are: - # RD_KAFKA_ACL_PERMISSION_TYPE_DENY = 2 - # RD_KAFKA_ACL_PERMISSION_TYPE_ALLOW = 3 - # - # @return [CreateAclHandle] Create acl handle that can be used to wait for the result of creating the acl - # - # @raise [RdkafkaError] - def create_acl(resource_type:, resource_name:, resource_pattern_type:, principal:, host:, operation:, permission_type:) - closed_admin_check(__method__) - - # Create a rd_kafka_AclBinding_t representing the new acl - error_buffer = FFI::MemoryPointer.from_string(" " * 256) - new_acl_ptr = Rdkafka::Bindings.rd_kafka_AclBinding_new( - resource_type, - FFI::MemoryPointer.from_string(resource_name), - resource_pattern_type, - FFI::MemoryPointer.from_string(principal), - FFI::MemoryPointer.from_string(host), - operation, - permission_type, - error_buffer, - 256 - ) - if new_acl_ptr.null? 
- raise Rdkafka::Config::ConfigError.new(error_buffer.read_string) - end - - # Note that rd_kafka_CreateAcls can create more than one acl at a time - pointer_array = [new_acl_ptr] - acls_array_ptr = FFI::MemoryPointer.new(:pointer) - acls_array_ptr.write_array_of_pointer(pointer_array) - - # Get a pointer to the queue that our request will be enqueued on - queue_ptr = @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_queue_get_background(inner) - end - - if queue_ptr.null? - Rdkafka::Bindings.rd_kafka_AclBinding_destroy(new_acl_ptr) - raise Rdkafka::Config::ConfigError.new("rd_kafka_queue_get_background was NULL") - end - - # Create and register the handle that we will return to the caller - create_acl_handle = CreateAclHandle.new - create_acl_handle[:pending] = true - create_acl_handle[:response] = -1 - CreateAclHandle.register(create_acl_handle) - - admin_options_ptr = @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_AdminOptions_new(inner, Rdkafka::Bindings::RD_KAFKA_ADMIN_OP_CREATEACLS) - end - - Rdkafka::Bindings.rd_kafka_AdminOptions_set_opaque(admin_options_ptr, create_acl_handle.to_ptr) - - begin - @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_CreateAcls( - inner, - acls_array_ptr, - 1, - admin_options_ptr, - queue_ptr - ) - end - rescue Exception - CreateAclHandle.remove(create_acl_handle.to_ptr.address) - raise - ensure - Rdkafka::Bindings.rd_kafka_AdminOptions_destroy(admin_options_ptr) - Rdkafka::Bindings.rd_kafka_queue_destroy(queue_ptr) - Rdkafka::Bindings.rd_kafka_AclBinding_destroy(new_acl_ptr) - end - - create_acl_handle - end - - # Delete acl - # - # @param resource_type - values of type rd_kafka_ResourceType_t - # https://github.com/confluentinc/librdkafka/blob/292d2a66b9921b783f08147807992e603c7af059/src/rdkafka.h#L7307 - # valid values are: - # RD_KAFKA_RESOURCE_TOPIC = 2 - # RD_KAFKA_RESOURCE_GROUP = 3 - # RD_KAFKA_RESOURCE_BROKER = 4 - # @param resource_pattern_type - values of type rd_kafka_ResourcePatternType_t - # https://github.com/confluentinc/librdkafka/blob/292d2a66b9921b783f08147807992e603c7af059/src/rdkafka.h#L7320 - # valid values are: - # RD_KAFKA_RESOURCE_PATTERN_MATCH = 2 - # RD_KAFKA_RESOURCE_PATTERN_LITERAL = 3 - # RD_KAFKA_RESOURCE_PATTERN_PREFIXED = 4 - # @param operation - values of type rd_kafka_AclOperation_t - # https://github.com/confluentinc/librdkafka/blob/292d2a66b9921b783f08147807992e603c7af059/src/rdkafka.h#L8403 - # valid values are: - # RD_KAFKA_ACL_OPERATION_ALL = 2 - # RD_KAFKA_ACL_OPERATION_READ = 3 - # RD_KAFKA_ACL_OPERATION_WRITE = 4 - # RD_KAFKA_ACL_OPERATION_CREATE = 5 - # RD_KAFKA_ACL_OPERATION_DELETE = 6 - # RD_KAFKA_ACL_OPERATION_ALTER = 7 - # RD_KAFKA_ACL_OPERATION_DESCRIBE = 8 - # RD_KAFKA_ACL_OPERATION_CLUSTER_ACTION = 9 - # RD_KAFKA_ACL_OPERATION_DESCRIBE_CONFIGS = 10 - # RD_KAFKA_ACL_OPERATION_ALTER_CONFIGS = 11 - # RD_KAFKA_ACL_OPERATION_IDEMPOTENT_WRITE = 12 - # @param permission_type - values of type rd_kafka_AclPermissionType_t - # https://github.com/confluentinc/librdkafka/blob/292d2a66b9921b783f08147807992e603c7af059/src/rdkafka.h#L8435 - # valid values are: - # RD_KAFKA_ACL_PERMISSION_TYPE_DENY = 2 - # RD_KAFKA_ACL_PERMISSION_TYPE_ALLOW = 3 - # @return [DeleteAclHandle] Delete acl handle that can be used to wait for the result of deleting the acl - # - # @raise [RdkafkaError] - def delete_acl(resource_type:, resource_name:, resource_pattern_type:, principal:, host:, operation:, permission_type:) - closed_admin_check(__method__) - - # Create a rd_kafka_AclBinding_t 
representing the acl to be deleted - error_buffer = FFI::MemoryPointer.from_string(" " * 256) - - delete_acl_ptr = Rdkafka::Bindings.rd_kafka_AclBindingFilter_new( - resource_type, - resource_name ? FFI::MemoryPointer.from_string(resource_name) : nil, - resource_pattern_type, - principal ? FFI::MemoryPointer.from_string(principal) : nil, - host ? FFI::MemoryPointer.from_string(host) : nil, - operation, - permission_type, - error_buffer, - 256 - ) - - if delete_acl_ptr.null? - raise Rdkafka::Config::ConfigError.new(error_buffer.read_string) - end - - # Note that rd_kafka_DeleteAcls can delete more than one acl at a time - pointer_array = [delete_acl_ptr] - acls_array_ptr = FFI::MemoryPointer.new(:pointer) - acls_array_ptr.write_array_of_pointer(pointer_array) - - # Get a pointer to the queue that our request will be enqueued on - queue_ptr = @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_queue_get_background(inner) - end - - if queue_ptr.null? - Rdkafka::Bindings.rd_kafka_AclBinding_destroy(new_acl_ptr) - raise Rdkafka::Config::ConfigError.new("rd_kafka_queue_get_background was NULL") - end - - # Create and register the handle that we will return to the caller - delete_acl_handle = DeleteAclHandle.new - delete_acl_handle[:pending] = true - delete_acl_handle[:response] = -1 - DeleteAclHandle.register(delete_acl_handle) - - admin_options_ptr = @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_AdminOptions_new(inner, Rdkafka::Bindings::RD_KAFKA_ADMIN_OP_DELETEACLS) - end - - Rdkafka::Bindings.rd_kafka_AdminOptions_set_opaque(admin_options_ptr, delete_acl_handle.to_ptr) - - begin - @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_DeleteAcls( - inner, - acls_array_ptr, - 1, - admin_options_ptr, - queue_ptr - ) - end - rescue Exception - DeleteAclHandle.remove(delete_acl_handle.to_ptr.address) - raise - ensure - Rdkafka::Bindings.rd_kafka_AdminOptions_destroy(admin_options_ptr) - Rdkafka::Bindings.rd_kafka_queue_destroy(queue_ptr) - Rdkafka::Bindings.rd_kafka_AclBinding_destroy(delete_acl_ptr) - end - - delete_acl_handle - end - - # Describe acls - # - # @param resource_type - values of type rd_kafka_ResourceType_t - # https://github.com/confluentinc/librdkafka/blob/292d2a66b9921b783f08147807992e603c7af059/src/rdkafka.h#L7307 - # valid values are: - # RD_KAFKA_RESOURCE_TOPIC = 2 - # RD_KAFKA_RESOURCE_GROUP = 3 - # RD_KAFKA_RESOURCE_BROKER = 4 - # @param resource_pattern_type - values of type rd_kafka_ResourcePatternType_t - # https://github.com/confluentinc/librdkafka/blob/292d2a66b9921b783f08147807992e603c7af059/src/rdkafka.h#L7320 - # valid values are: - # RD_KAFKA_RESOURCE_PATTERN_MATCH = 2 - # RD_KAFKA_RESOURCE_PATTERN_LITERAL = 3 - # RD_KAFKA_RESOURCE_PATTERN_PREFIXED = 4 - # @param operation - values of type rd_kafka_AclOperation_t - # https://github.com/confluentinc/librdkafka/blob/292d2a66b9921b783f08147807992e603c7af059/src/rdkafka.h#L8403 - # valid values are: - # RD_KAFKA_ACL_OPERATION_ALL = 2 - # RD_KAFKA_ACL_OPERATION_READ = 3 - # RD_KAFKA_ACL_OPERATION_WRITE = 4 - # RD_KAFKA_ACL_OPERATION_CREATE = 5 - # RD_KAFKA_ACL_OPERATION_DELETE = 6 - # RD_KAFKA_ACL_OPERATION_ALTER = 7 - # RD_KAFKA_ACL_OPERATION_DESCRIBE = 8 - # RD_KAFKA_ACL_OPERATION_CLUSTER_ACTION = 9 - # RD_KAFKA_ACL_OPERATION_DESCRIBE_CONFIGS = 10 - # RD_KAFKA_ACL_OPERATION_ALTER_CONFIGS = 11 - # RD_KAFKA_ACL_OPERATION_IDEMPOTENT_WRITE = 12 - # @param permission_type - values of type rd_kafka_AclPermissionType_t - # 
https://github.com/confluentinc/librdkafka/blob/292d2a66b9921b783f08147807992e603c7af059/src/rdkafka.h#L8435 - # valid values are: - # RD_KAFKA_ACL_PERMISSION_TYPE_DENY = 2 - # RD_KAFKA_ACL_PERMISSION_TYPE_ALLOW = 3 - # @return [DescribeAclHandle] Describe acl handle that can be used to wait for the result of fetching acls - # - # @raise [RdkafkaError] - def describe_acl(resource_type:, resource_name:, resource_pattern_type:, principal:, host:, operation:, permission_type:) - closed_admin_check(__method__) - - # Create a rd_kafka_AclBinding_t with the filters to fetch existing acls - error_buffer = FFI::MemoryPointer.from_string(" " * 256) - describe_acl_ptr = Rdkafka::Bindings.rd_kafka_AclBindingFilter_new( - resource_type, - resource_name ? FFI::MemoryPointer.from_string(resource_name) : nil, - resource_pattern_type, - principal ? FFI::MemoryPointer.from_string(principal) : nil, - host ? FFI::MemoryPointer.from_string(host) : nil, - operation, - permission_type, - error_buffer, - 256 - ) - if describe_acl_ptr.null? - raise Rdkafka::Config::ConfigError.new(error_buffer.read_string) - end - - # Get a pointer to the queue that our request will be enqueued on - queue_ptr = @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_queue_get_background(inner) - end - - if queue_ptr.null? - Rdkafka::Bindings.rd_kafka_AclBinding_destroy(new_acl_ptr) - raise Rdkafka::Config::ConfigError.new("rd_kafka_queue_get_background was NULL") - end - - # Create and register the handle that we will return to the caller - describe_acl_handle = DescribeAclHandle.new - describe_acl_handle[:pending] = true - describe_acl_handle[:response] = -1 - DescribeAclHandle.register(describe_acl_handle) - - admin_options_ptr = @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_AdminOptions_new(inner, Rdkafka::Bindings::RD_KAFKA_ADMIN_OP_DESCRIBEACLS) - end - - Rdkafka::Bindings.rd_kafka_AdminOptions_set_opaque(admin_options_ptr, describe_acl_handle.to_ptr) - - begin - @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_DescribeAcls( - inner, - describe_acl_ptr, - admin_options_ptr, - queue_ptr - ) - end - rescue Exception - DescribeAclHandle.remove(describe_acl_handle.to_ptr.address) - raise - ensure - Rdkafka::Bindings.rd_kafka_AdminOptions_destroy(admin_options_ptr) - Rdkafka::Bindings.rd_kafka_queue_destroy(queue_ptr) - Rdkafka::Bindings.rd_kafka_AclBinding_destroy(describe_acl_ptr) - end - - describe_acl_handle - end - - - # Describe configs - # - # @param resources [Array] Array where elements are hashes with two keys: - # - `:resource_type` - numerical resource type based on Kafka API - # - `:resource_name` - string with resource name - # @return [DescribeConfigsHandle] Describe config handle that can be used to wait for the - # result of fetching resources with their appropriate configs - # - # @raise [RdkafkaError] - # - # @note Several resources can be requested at one go, but only one broker at a time - def describe_configs(resources) - closed_admin_check(__method__) - - handle = DescribeConfigsHandle.new - handle[:pending] = true - handle[:response] = -1 - - queue_ptr = @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_queue_get_background(inner) - end - - if queue_ptr.null? 
- raise Rdkafka::Config::ConfigError.new("rd_kafka_queue_get_background was NULL") - end - - admin_options_ptr = @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_AdminOptions_new( - inner, - Rdkafka::Bindings::RD_KAFKA_ADMIN_OP_DESCRIBECONFIGS - ) - end - - DescribeConfigsHandle.register(handle) - Rdkafka::Bindings.rd_kafka_AdminOptions_set_opaque(admin_options_ptr, handle.to_ptr) - - pointer_array = resources.map do |resource_details| - Rdkafka::Bindings.rd_kafka_ConfigResource_new( - resource_details.fetch(:resource_type), - FFI::MemoryPointer.from_string( - resource_details.fetch(:resource_name) - ) - ) - end - - configs_array_ptr = FFI::MemoryPointer.new(:pointer, pointer_array.size) - configs_array_ptr.write_array_of_pointer(pointer_array) - - begin - @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_DescribeConfigs( - inner, - configs_array_ptr, - pointer_array.size, - admin_options_ptr, - queue_ptr - ) - end - rescue Exception - DescribeConfigsHandle.remove(handle.to_ptr.address) - - raise - ensure - Rdkafka::Bindings.rd_kafka_ConfigResource_destroy_array( - configs_array_ptr, - pointer_array.size - ) if configs_array_ptr - end - - handle - end - - # Alters in an incremental way all the configs provided for given resources - # - # @param resources_with_configs [Array] resources with the configs key that contains - # name, value and the proper op_type to perform on this value. - # - # @return [IncrementalAlterConfigsHandle] Incremental alter configs handle that can be used to - # wait for the result of altering resources with their appropriate configs - # - # @raise [RdkafkaError] - # - # @note Several resources can be requested at one go, but only one broker at a time - # @note The results won't contain altered values but only the altered resources - def incremental_alter_configs(resources_with_configs) - closed_admin_check(__method__) - - handle = IncrementalAlterConfigsHandle.new - handle[:pending] = true - handle[:response] = -1 - - queue_ptr = @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_queue_get_background(inner) - end - - if queue_ptr.null? 
- raise Rdkafka::Config::ConfigError.new("rd_kafka_queue_get_background was NULL") - end - - admin_options_ptr = @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_AdminOptions_new( - inner, - Rdkafka::Bindings::RD_KAFKA_ADMIN_OP_INCREMENTALALTERCONFIGS - ) - end - - IncrementalAlterConfigsHandle.register(handle) - Rdkafka::Bindings.rd_kafka_AdminOptions_set_opaque(admin_options_ptr, handle.to_ptr) - - # Tu poprawnie tworzyc - pointer_array = resources_with_configs.map do |resource_details| - # First build the appropriate resource representation - resource_ptr = Rdkafka::Bindings.rd_kafka_ConfigResource_new( - resource_details.fetch(:resource_type), - FFI::MemoryPointer.from_string( - resource_details.fetch(:resource_name) - ) - ) - - resource_details.fetch(:configs).each do |config| - Bindings.rd_kafka_ConfigResource_add_incremental_config( - resource_ptr, - config.fetch(:name), - config.fetch(:op_type), - config.fetch(:value) - ) - end - - resource_ptr - end - - configs_array_ptr = FFI::MemoryPointer.new(:pointer, pointer_array.size) - configs_array_ptr.write_array_of_pointer(pointer_array) - - - begin - @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_IncrementalAlterConfigs( - inner, - configs_array_ptr, - pointer_array.size, - admin_options_ptr, - queue_ptr - ) - end - rescue Exception - IncrementalAlterConfigsHandle.remove(handle.to_ptr.address) - - raise - ensure - Rdkafka::Bindings.rd_kafka_ConfigResource_destroy_array( - configs_array_ptr, - pointer_array.size - ) if configs_array_ptr - end - - handle - end - - private - - def closed_admin_check(method) - raise Rdkafka::ClosedAdminError.new(method) if closed? - end end end diff --git a/lib/rdkafka/admin/acl_binding_result.rb b/lib/rdkafka/admin/acl_binding_result.rb deleted file mode 100644 index 4a978cc0..00000000 --- a/lib/rdkafka/admin/acl_binding_result.rb +++ /dev/null @@ -1,51 +0,0 @@ -# frozen_string_literal: true - -module Rdkafka - class Admin - # Extracts attributes of rd_kafka_AclBinding_t - # - class AclBindingResult - attr_reader :result_error, :error_string, :matching_acl_resource_type, - :matching_acl_resource_name, :matching_acl_resource_pattern_type, - :matching_acl_principal, :matching_acl_host, :matching_acl_operation, - :matching_acl_permission_type - - # This attribute was initially released under the name that is now an alias - # We keep it for backwards compatibility but it was changed for the consistency - alias matching_acl_pattern_type matching_acl_resource_pattern_type - - def initialize(matching_acl) - rd_kafka_error_pointer = Rdkafka::Bindings.rd_kafka_AclBinding_error(matching_acl) - @result_error = Rdkafka::Bindings.rd_kafka_error_code(rd_kafka_error_pointer) - error_string = Rdkafka::Bindings.rd_kafka_error_string(rd_kafka_error_pointer) - - if error_string != FFI::Pointer::NULL - @error_string = error_string.read_string - end - - @matching_acl_resource_type = Rdkafka::Bindings.rd_kafka_AclBinding_restype(matching_acl) - matching_acl_resource_name = Rdkafka::Bindings.rd_kafka_AclBinding_name(matching_acl) - - if matching_acl_resource_name != FFI::Pointer::NULL - @matching_acl_resource_name = matching_acl_resource_name.read_string - end - - @matching_acl_resource_pattern_type = Rdkafka::Bindings.rd_kafka_AclBinding_resource_pattern_type(matching_acl) - matching_acl_principal = Rdkafka::Bindings.rd_kafka_AclBinding_principal(matching_acl) - - if matching_acl_principal != FFI::Pointer::NULL - @matching_acl_principal = matching_acl_principal.read_string - end - - 
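Each string attribute in this extractor is only read after a NULL check, because librdkafka may return NULL for optional ACL fields and calling read_string on a NULL pointer raises. A small illustrative sketch of that guard, using a hypothetical read_optional_string helper that is not part of the gem:

require "ffi"

# Hypothetical helper mirroring the guard used throughout AclBindingResult:
# only call #read_string when librdkafka returned a non-NULL pointer.
def read_optional_string(pointer)
  return nil if pointer.nil? || pointer == FFI::Pointer::NULL

  pointer.read_string
end

# Example (assuming `matching_acl` is a pointer from an rd_kafka_AclBinding_* accessor):
# name = read_optional_string(Rdkafka::Bindings.rd_kafka_AclBinding_name(matching_acl))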
matching_acl_host = Rdkafka::Bindings.rd_kafka_AclBinding_host(matching_acl) - - if matching_acl_host != FFI::Pointer::NULL - @matching_acl_host = matching_acl_host.read_string - end - - @matching_acl_operation = Rdkafka::Bindings.rd_kafka_AclBinding_operation(matching_acl) - @matching_acl_permission_type = Rdkafka::Bindings.rd_kafka_AclBinding_permission_type(matching_acl) - end - end - end -end diff --git a/lib/rdkafka/admin/config_binding_result.rb b/lib/rdkafka/admin/config_binding_result.rb deleted file mode 100644 index 4080c9e5..00000000 --- a/lib/rdkafka/admin/config_binding_result.rb +++ /dev/null @@ -1,30 +0,0 @@ -# frozen_string_literal: true - -module Rdkafka - class Admin - # A single config binding result that represents its values extracted from C - class ConfigBindingResult - attr_reader :name, :value, :read_only, :default, :sensitive, :synonym, :synonyms - - # @param config_ptr [FFI::Pointer] config pointer - def initialize(config_ptr) - @name = Bindings.rd_kafka_ConfigEntry_name(config_ptr) - @value = Bindings.rd_kafka_ConfigEntry_value(config_ptr) - @read_only = Bindings.rd_kafka_ConfigEntry_is_read_only(config_ptr) - @default = Bindings.rd_kafka_ConfigEntry_is_default(config_ptr) - @sensitive = Bindings.rd_kafka_ConfigEntry_is_sensitive(config_ptr) - @synonym = Bindings.rd_kafka_ConfigEntry_is_synonym(config_ptr) - @synonyms = [] - - # The code below builds up the config synonyms using same config binding - pointer_to_size_t = FFI::MemoryPointer.new(:int32) - synonym_ptr = Bindings.rd_kafka_ConfigEntry_synonyms(config_ptr, pointer_to_size_t) - synonyms_ptr = synonym_ptr.read_array_of_pointer(pointer_to_size_t.read_int) - - (1..pointer_to_size_t.read_int).map do |ar| - self.class.new synonyms_ptr[ar - 1] - end - end - end - end -end diff --git a/lib/rdkafka/admin/config_resource_binding_result.rb b/lib/rdkafka/admin/config_resource_binding_result.rb deleted file mode 100644 index 0be030fa..00000000 --- a/lib/rdkafka/admin/config_resource_binding_result.rb +++ /dev/null @@ -1,18 +0,0 @@ -# frozen_string_literal: true - -module Rdkafka - class Admin - # A simple binding that represents the requested config resource - class ConfigResourceBindingResult - attr_reader :name, :type, :configs, :configs_count - - def initialize(config_resource_ptr) - ffi_binding = Bindings::ConfigResource.new(config_resource_ptr) - - @name = ffi_binding[:name] - @type = ffi_binding[:type] - @configs = [] - end - end - end -end diff --git a/lib/rdkafka/admin/create_acl_handle.rb b/lib/rdkafka/admin/create_acl_handle.rb deleted file mode 100644 index 58e16b8c..00000000 --- a/lib/rdkafka/admin/create_acl_handle.rb +++ /dev/null @@ -1,28 +0,0 @@ -# frozen_string_literal: true - -module Rdkafka - class Admin - class CreateAclHandle < AbstractHandle - layout :pending, :bool, - :response, :int, - :response_string, :pointer - - # @return [String] the name of the operation - def operation_name - "create acl" - end - - # @return [CreateAclReport] instance with rdkafka_response value as 0 and rdkafka_response_string value as empty string if the acl creation was successful - def create_result - CreateAclReport.new(rdkafka_response: self[:response], rdkafka_response_string: self[:response_string]) - end - - def raise_error - raise RdkafkaError.new( - self[:response], - broker_message: self[:response_string].read_string - ) - end - end - end -end diff --git a/lib/rdkafka/admin/create_acl_report.rb b/lib/rdkafka/admin/create_acl_report.rb deleted file mode 100644 index bcd63d3b..00000000 --- 
a/lib/rdkafka/admin/create_acl_report.rb +++ /dev/null @@ -1,24 +0,0 @@ -# frozen_string_literal: true - -module Rdkafka - class Admin - class CreateAclReport - - # Upon successful creation of Acl RD_KAFKA_RESP_ERR_NO_ERROR - 0 is returned as rdkafka_response - # @return [Integer] - attr_reader :rdkafka_response - - - # Upon successful creation of Acl empty string will be returned as rdkafka_response_string - # @return [String] - attr_reader :rdkafka_response_string - - def initialize(rdkafka_response:, rdkafka_response_string:) - @rdkafka_response = rdkafka_response - if rdkafka_response_string != FFI::Pointer::NULL - @rdkafka_response_string = rdkafka_response_string.read_string - end - end - end - end -end diff --git a/lib/rdkafka/admin/create_partitions_handle.rb b/lib/rdkafka/admin/create_partitions_handle.rb deleted file mode 100644 index c38c632b..00000000 --- a/lib/rdkafka/admin/create_partitions_handle.rb +++ /dev/null @@ -1,27 +0,0 @@ -module Rdkafka - class Admin - class CreatePartitionsHandle < AbstractHandle - layout :pending, :bool, - :response, :int, - :error_string, :pointer, - :result_name, :pointer - - # @return [String] the name of the operation - def operation_name - "create partitions" - end - - # @return [Boolean] whether the create topic was successful - def create_result - CreatePartitionsReport.new(self[:error_string], self[:result_name]) - end - - def raise_error - raise RdkafkaError.new( - self[:response], - broker_message: CreateTopicReport.new(self[:error_string], self[:result_name]).error_string - ) - end - end - end -end diff --git a/lib/rdkafka/admin/create_partitions_report.rb b/lib/rdkafka/admin/create_partitions_report.rb deleted file mode 100644 index e7b48a51..00000000 --- a/lib/rdkafka/admin/create_partitions_report.rb +++ /dev/null @@ -1,6 +0,0 @@ -module Rdkafka - class Admin - class CreatePartitionsReport < CreateTopicReport - end - end -end diff --git a/lib/rdkafka/admin/create_topic_handle.rb b/lib/rdkafka/admin/create_topic_handle.rb index 460c6c00..2a5f506a 100644 --- a/lib/rdkafka/admin/create_topic_handle.rb +++ b/lib/rdkafka/admin/create_topic_handle.rb @@ -1,5 +1,3 @@ -# frozen_string_literal: true - module Rdkafka class Admin class CreateTopicHandle < AbstractHandle diff --git a/lib/rdkafka/admin/create_topic_report.rb b/lib/rdkafka/admin/create_topic_report.rb index 492e65dd..717546ab 100644 --- a/lib/rdkafka/admin/create_topic_report.rb +++ b/lib/rdkafka/admin/create_topic_report.rb @@ -1,5 +1,3 @@ -# frozen_string_literal: true - module Rdkafka class Admin class CreateTopicReport @@ -16,7 +14,7 @@ def initialize(error_string, result_name) @error_string = error_string.read_string end if result_name != FFI::Pointer::NULL - @result_name = result_name.read_string + @result_name = @result_name = result_name.read_string end end end diff --git a/lib/rdkafka/admin/delete_acl_handle.rb b/lib/rdkafka/admin/delete_acl_handle.rb deleted file mode 100644 index a7b45064..00000000 --- a/lib/rdkafka/admin/delete_acl_handle.rb +++ /dev/null @@ -1,30 +0,0 @@ -# frozen_string_literal: true - -module Rdkafka - class Admin - class DeleteAclHandle < AbstractHandle - layout :pending, :bool, - :response, :int, - :response_string, :pointer, - :matching_acls, :pointer, - :matching_acls_count, :int - - # @return [String] the name of the operation - def operation_name - "delete acl" - end - - # @return [DeleteAclReport] instance with an array of matching_acls - def create_result - DeleteAclReport.new(matching_acls: self[:matching_acls], matching_acls_count: 
self[:matching_acls_count]) - end - - def raise_error - raise RdkafkaError.new( - self[:response], - broker_message: self[:response_string].read_string - ) - end - end - end -end diff --git a/lib/rdkafka/admin/delete_acl_report.rb b/lib/rdkafka/admin/delete_acl_report.rb deleted file mode 100644 index b7f194ea..00000000 --- a/lib/rdkafka/admin/delete_acl_report.rb +++ /dev/null @@ -1,23 +0,0 @@ -# frozen_string_literal: true - -module Rdkafka - class Admin - class DeleteAclReport - - # deleted acls - # @return [Rdkafka::Bindings::AclBindingResult] - attr_reader :deleted_acls - - def initialize(matching_acls:, matching_acls_count:) - @deleted_acls=[] - if matching_acls != FFI::Pointer::NULL - acl_binding_result_pointers = matching_acls.read_array_of_pointer(matching_acls_count) - (1..matching_acls_count).map do |matching_acl_index| - acl_binding_result = AclBindingResult.new(acl_binding_result_pointers[matching_acl_index - 1]) - @deleted_acls << acl_binding_result - end - end - end - end - end -end diff --git a/lib/rdkafka/admin/delete_groups_handle.rb b/lib/rdkafka/admin/delete_groups_handle.rb deleted file mode 100644 index 78b2eebc..00000000 --- a/lib/rdkafka/admin/delete_groups_handle.rb +++ /dev/null @@ -1,28 +0,0 @@ -# frozen_string_literal: true - -module Rdkafka - class Admin - class DeleteGroupsHandle < AbstractHandle - layout :pending, :bool, # TODO: ??? - :response, :int, - :error_string, :pointer, - :result_name, :pointer - - # @return [String] the name of the operation - def operation_name - "delete groups" - end - - def create_result - DeleteGroupsReport.new(self[:error_string], self[:result_name]) - end - - def raise_error - raise RdkafkaError.new( - self[:response], - broker_message: create_result.error_string - ) - end - end - end -end diff --git a/lib/rdkafka/admin/delete_groups_report.rb b/lib/rdkafka/admin/delete_groups_report.rb deleted file mode 100644 index 53eabffc..00000000 --- a/lib/rdkafka/admin/delete_groups_report.rb +++ /dev/null @@ -1,24 +0,0 @@ -# frozen_string_literal: true - -module Rdkafka - class Admin - class DeleteGroupsReport - # Any error message generated from the DeleteTopic - # @return [String] - attr_reader :error_string - - # The name of the topic deleted - # @return [String] - attr_reader :result_name - - def initialize(error_string, result_name) - if error_string != FFI::Pointer::NULL - @error_string = error_string.read_string - end - if result_name != FFI::Pointer::NULL - @result_name = result_name.read_string - end - end - end - end -end diff --git a/lib/rdkafka/admin/delete_topic_handle.rb b/lib/rdkafka/admin/delete_topic_handle.rb index f0d1f1ec..bd86296e 100644 --- a/lib/rdkafka/admin/delete_topic_handle.rb +++ b/lib/rdkafka/admin/delete_topic_handle.rb @@ -1,5 +1,3 @@ -# frozen_string_literal: true - module Rdkafka class Admin class DeleteTopicHandle < AbstractHandle diff --git a/lib/rdkafka/admin/delete_topic_report.rb b/lib/rdkafka/admin/delete_topic_report.rb index 9ecbe212..0b2893fb 100644 --- a/lib/rdkafka/admin/delete_topic_report.rb +++ b/lib/rdkafka/admin/delete_topic_report.rb @@ -1,5 +1,3 @@ -# frozen_string_literal: true - module Rdkafka class Admin class DeleteTopicReport @@ -16,7 +14,7 @@ def initialize(error_string, result_name) @error_string = error_string.read_string end if result_name != FFI::Pointer::NULL - @result_name = result_name.read_string + @result_name = @result_name = result_name.read_string end end end diff --git a/lib/rdkafka/admin/describe_acl_handle.rb b/lib/rdkafka/admin/describe_acl_handle.rb deleted file 
mode 100644 index 46efbf1c..00000000 --- a/lib/rdkafka/admin/describe_acl_handle.rb +++ /dev/null @@ -1,30 +0,0 @@ -# frozen_string_literal: true - -module Rdkafka - class Admin - class DescribeAclHandle < AbstractHandle - layout :pending, :bool, - :response, :int, - :response_string, :pointer, - :acls, :pointer, - :acls_count, :int - - # @return [String] the name of the operation. - def operation_name - "describe acl" - end - - # @return [DescribeAclReport] instance with an array of acls that matches the request filters. - def create_result - DescribeAclReport.new(acls: self[:acls], acls_count: self[:acls_count]) - end - - def raise_error - raise RdkafkaError.new( - self[:response], - broker_message: self[:response_string].read_string - ) - end - end - end -end diff --git a/lib/rdkafka/admin/describe_acl_report.rb b/lib/rdkafka/admin/describe_acl_report.rb deleted file mode 100644 index 92793ecb..00000000 --- a/lib/rdkafka/admin/describe_acl_report.rb +++ /dev/null @@ -1,24 +0,0 @@ -# frozen_string_literal: true - -module Rdkafka - class Admin - class DescribeAclReport - - # acls that exists in the cluster for the resource_type, resource_name and pattern_type filters provided in the request. - # @return [Rdkafka::Bindings::AclBindingResult] array of matching acls. - attr_reader :acls - - def initialize(acls:, acls_count:) - @acls=[] - - if acls != FFI::Pointer::NULL - acl_binding_result_pointers = acls.read_array_of_pointer(acls_count) - (1..acls_count).map do |acl_index| - acl_binding_result = AclBindingResult.new(acl_binding_result_pointers[acl_index - 1]) - @acls << acl_binding_result - end - end - end - end - end -end diff --git a/lib/rdkafka/admin/describe_configs_handle.rb b/lib/rdkafka/admin/describe_configs_handle.rb deleted file mode 100644 index 93d3357a..00000000 --- a/lib/rdkafka/admin/describe_configs_handle.rb +++ /dev/null @@ -1,33 +0,0 @@ -# frozen_string_literal: true - -module Rdkafka - class Admin - class DescribeConfigsHandle < AbstractHandle - layout :pending, :bool, - :response, :int, - :response_string, :pointer, - :config_entries, :pointer, - :entry_count, :int - - # @return [String] the name of the operation. - def operation_name - "describe configs" - end - - # @return [DescribeAclReport] instance with an array of acls that matches the request filters. 
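On gem versions that still ship DescribeConfigsHandle, the handle returned by Admin#describe_configs is typically consumed as in the sketch below; the broker address and topic name are assumptions for illustration only:

require "rdkafka"

admin = Rdkafka::Config.new("bootstrap.servers" => "localhost:9092").admin

handle = admin.describe_configs(
  [
    {
      resource_type: Rdkafka::Bindings::RD_KAFKA_RESOURCE_TOPIC,
      resource_name: "example-topic"
    }
  ]
)

# AbstractHandle#wait blocks until the background event callback fills the handle.
report = handle.wait(max_wait_timeout: 15)

report.resources.each do |resource|
  resource.configs.each do |config|
    puts "#{resource.name} #{config.name}=#{config.value}"
  end
end

admin.close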
- def create_result - DescribeConfigsReport.new( - config_entries: self[:config_entries], - entry_count: self[:entry_count] - ) - end - - def raise_error - raise RdkafkaError.new( - self[:response], - broker_message: self[:response_string].read_string - ) - end - end - end -end diff --git a/lib/rdkafka/admin/describe_configs_report.rb b/lib/rdkafka/admin/describe_configs_report.rb deleted file mode 100644 index 77eeb559..00000000 --- a/lib/rdkafka/admin/describe_configs_report.rb +++ /dev/null @@ -1,54 +0,0 @@ -# frozen_string_literal: true - -module Rdkafka - class Admin - class DescribeConfigsReport - attr_reader :resources - - def initialize(config_entries:, entry_count:) - @resources=[] - - return if config_entries == FFI::Pointer::NULL - - config_entries - .read_array_of_pointer(entry_count) - .each { |config_resource_result_ptr| validate!(config_resource_result_ptr) } - .each do |config_resource_result_ptr| - config_resource_result = ConfigResourceBindingResult.new(config_resource_result_ptr) - - pointer_to_size_t = FFI::MemoryPointer.new(:int32) - configs_ptr = Bindings.rd_kafka_ConfigResource_configs( - config_resource_result_ptr, - pointer_to_size_t - ) - - configs_ptr - .read_array_of_pointer(pointer_to_size_t.read_int) - .map { |config_ptr| ConfigBindingResult.new(config_ptr) } - .each { |config_binding| config_resource_result.configs << config_binding } - - @resources << config_resource_result - end - ensure - return if config_entries == FFI::Pointer::NULL - - Bindings.rd_kafka_ConfigResource_destroy_array(config_entries, entry_count) - end - - private - - def validate!(config_resource_result_ptr) - code = Bindings.rd_kafka_ConfigResource_error(config_resource_result_ptr) - - return if code.zero? - - raise( - RdkafkaError.new( - code, - Bindings.rd_kafka_ConfigResource_error_string(config_resource_result_ptr) - ) - ) - end - end - end -end diff --git a/lib/rdkafka/admin/incremental_alter_configs_handle.rb b/lib/rdkafka/admin/incremental_alter_configs_handle.rb deleted file mode 100644 index a3276384..00000000 --- a/lib/rdkafka/admin/incremental_alter_configs_handle.rb +++ /dev/null @@ -1,33 +0,0 @@ -# frozen_string_literal: true - -module Rdkafka - class Admin - class IncrementalAlterConfigsHandle < AbstractHandle - layout :pending, :bool, - :response, :int, - :response_string, :pointer, - :config_entries, :pointer, - :entry_count, :int - - # @return [String] the name of the operation. - def operation_name - "incremental alter configs" - end - - # @return [DescribeAclReport] instance with an array of acls that matches the request filters. 
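Similarly, IncrementalAlterConfigsHandle is obtained from Admin#incremental_alter_configs on versions that still provide it. A hedged sketch, with the broker address, topic name, and config value chosen purely for illustration:

require "rdkafka"

admin = Rdkafka::Config.new("bootstrap.servers" => "localhost:9092").admin

handle = admin.incremental_alter_configs(
  [
    {
      resource_type: Rdkafka::Bindings::RD_KAFKA_RESOURCE_TOPIC,
      resource_name: "example-topic",
      configs: [
        {
          name: "delete.retention.ms",
          value: "50000",
          op_type: Rdkafka::Bindings::RD_KAFKA_ALTER_CONFIG_OP_TYPE_SET
        }
      ]
    }
  ]
)

# The resulting report lists the altered resources, not the resulting values.
handle.wait(max_wait_timeout: 15)
admin.close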
- def create_result - IncrementalAlterConfigsReport.new( - config_entries: self[:config_entries], - entry_count: self[:entry_count] - ) - end - - def raise_error - raise RdkafkaError.new( - self[:response], - broker_message: self[:response_string].read_string - ) - end - end - end -end diff --git a/lib/rdkafka/admin/incremental_alter_configs_report.rb b/lib/rdkafka/admin/incremental_alter_configs_report.rb deleted file mode 100644 index 2b25837b..00000000 --- a/lib/rdkafka/admin/incremental_alter_configs_report.rb +++ /dev/null @@ -1,54 +0,0 @@ -# frozen_string_literal: true - -module Rdkafka - class Admin - class IncrementalAlterConfigsReport - attr_reader :resources - - def initialize(config_entries:, entry_count:) - @resources=[] - - return if config_entries == FFI::Pointer::NULL - - config_entries - .read_array_of_pointer(entry_count) - .each { |config_resource_result_ptr| validate!(config_resource_result_ptr) } - .each do |config_resource_result_ptr| - config_resource_result = ConfigResourceBindingResult.new(config_resource_result_ptr) - - pointer_to_size_t = FFI::MemoryPointer.new(:int32) - configs_ptr = Bindings.rd_kafka_ConfigResource_configs( - config_resource_result_ptr, - pointer_to_size_t - ) - - configs_ptr - .read_array_of_pointer(pointer_to_size_t.read_int) - .map { |config_ptr| ConfigBindingResult.new(config_ptr) } - .each { |config_binding| config_resource_result.configs << config_binding } - - @resources << config_resource_result - end - ensure - return if config_entries == FFI::Pointer::NULL - - Bindings.rd_kafka_ConfigResource_destroy_array(config_entries, entry_count) - end - - private - - def validate!(config_resource_result_ptr) - code = Bindings.rd_kafka_ConfigResource_error(config_resource_result_ptr) - - return if code.zero? 
- - raise( - RdkafkaError.new( - code, - Bindings.rd_kafka_ConfigResource_error_string(config_resource_result_ptr) - ) - ) - end - end - end -end diff --git a/lib/rdkafka/bindings.rb b/lib/rdkafka/bindings.rb index e2692199..ea760467 100644 --- a/lib/rdkafka/bindings.rb +++ b/lib/rdkafka/bindings.rb @@ -1,4 +1,6 @@ -# frozen_string_literal: true +require "ffi" +require "json" +require "logger" module Rdkafka # @private @@ -13,11 +15,10 @@ def self.lib_extension end end - ffi_lib File.join(__dir__, "../../ext/librdkafka.#{lib_extension}") + ffi_lib File.join(File.dirname(__FILE__), "../../ext/librdkafka.#{lib_extension}") RD_KAFKA_RESP_ERR__ASSIGN_PARTITIONS = -175 RD_KAFKA_RESP_ERR__REVOKE_PARTITIONS = -174 - RD_KAFKA_RESP_ERR__STATE = -172 RD_KAFKA_RESP_ERR__NOENT = -156 RD_KAFKA_RESP_ERR_NO_ERROR = 0 @@ -32,17 +33,15 @@ class SizePtr < FFI::Struct # Polling - attach_function :rd_kafka_flush, [:pointer, :int], :int, blocking: true - attach_function :rd_kafka_poll, [:pointer, :int], :int, blocking: true + attach_function :rd_kafka_poll, [:pointer, :int], :void, blocking: true attach_function :rd_kafka_outq_len, [:pointer], :int, blocking: true # Metadata - attach_function :rd_kafka_name, [:pointer], :string, blocking: true - attach_function :rd_kafka_memberid, [:pointer], :string, blocking: true - attach_function :rd_kafka_clusterid, [:pointer], :string, blocking: true - attach_function :rd_kafka_metadata, [:pointer, :int, :pointer, :pointer, :int], :int, blocking: true - attach_function :rd_kafka_metadata_destroy, [:pointer], :void, blocking: true + attach_function :rd_kafka_memberid, [:pointer], :string + attach_function :rd_kafka_clusterid, [:pointer], :string + attach_function :rd_kafka_metadata, [:pointer, :int, :pointer, :pointer, :int], :int + attach_function :rd_kafka_metadata_destroy, [:pointer], :void # Message struct @@ -89,58 +88,10 @@ class TopicPartitionList < FFI::Struct attach_function :rd_kafka_topic_partition_list_destroy, [:pointer], :void attach_function :rd_kafka_topic_partition_list_copy, [:pointer], :pointer - # Configs management - # - # Structs for management of configurations - # Each configuration is attached to a resource and one resource can have many configuration - # details. 
Each resource will also have separate errors results if obtaining configuration - # was not possible for any reason - class ConfigResource < FFI::Struct - layout :type, :int, - :name, :string - end - - attach_function :rd_kafka_DescribeConfigs, [:pointer, :pointer, :size_t, :pointer, :pointer], :void, blocking: true - attach_function :rd_kafka_ConfigResource_new, [:int32, :pointer], :pointer - attach_function :rd_kafka_ConfigResource_destroy_array, [:pointer, :int32], :void - attach_function :rd_kafka_event_DescribeConfigs_result, [:pointer], :pointer - attach_function :rd_kafka_DescribeConfigs_result_resources, [:pointer, :pointer], :pointer - attach_function :rd_kafka_ConfigResource_configs, [:pointer, :pointer], :pointer - attach_function :rd_kafka_ConfigEntry_name, [:pointer], :string - attach_function :rd_kafka_ConfigEntry_value, [:pointer], :string - attach_function :rd_kafka_ConfigEntry_is_read_only, [:pointer], :int - attach_function :rd_kafka_ConfigEntry_is_default, [:pointer], :int - attach_function :rd_kafka_ConfigEntry_is_sensitive, [:pointer], :int - attach_function :rd_kafka_ConfigEntry_is_synonym, [:pointer], :int - attach_function :rd_kafka_ConfigEntry_synonyms, [:pointer, :pointer], :pointer - attach_function :rd_kafka_ConfigResource_error, [:pointer], :int - attach_function :rd_kafka_ConfigResource_error_string, [:pointer], :string - attach_function :rd_kafka_IncrementalAlterConfigs, [:pointer, :pointer, :size_t, :pointer, :pointer], :void, blocking: true - attach_function :rd_kafka_IncrementalAlterConfigs_result_resources, [:pointer, :pointer], :pointer - attach_function :rd_kafka_ConfigResource_add_incremental_config, [:pointer, :string, :int32, :string], :pointer - attach_function :rd_kafka_event_IncrementalAlterConfigs_result, [:pointer], :pointer - - RD_KAFKA_ADMIN_OP_DESCRIBECONFIGS = 5 - RD_KAFKA_EVENT_DESCRIBECONFIGS_RESULT = 104 - - RD_KAFKA_ADMIN_OP_INCREMENTALALTERCONFIGS = 16 - RD_KAFKA_EVENT_INCREMENTALALTERCONFIGS_RESULT = 131072 - - RD_KAFKA_ALTER_CONFIG_OP_TYPE_SET = 0 - RD_KAFKA_ALTER_CONFIG_OP_TYPE_DELETE = 1 - RD_KAFKA_ALTER_CONFIG_OP_TYPE_APPEND = 2 - RD_KAFKA_ALTER_CONFIG_OP_TYPE_SUBTRACT = 3 - # Errors - class NativeErrorDesc < FFI::Struct - layout :code, :int, - :name, :pointer, - :desc, :pointer - end attach_function :rd_kafka_err2name, [:int], :string attach_function :rd_kafka_err2str, [:int], :string - attach_function :rd_kafka_get_err_descs, [:pointer, :pointer], :void # Configuration @@ -159,37 +110,28 @@ class NativeErrorDesc < FFI::Struct attach_function :rd_kafka_conf_set_stats_cb, [:pointer, :stats_cb], :void callback :error_cb, [:pointer, :int, :string, :pointer], :void attach_function :rd_kafka_conf_set_error_cb, [:pointer, :error_cb], :void - attach_function :rd_kafka_rebalance_protocol, [:pointer], :string - callback :oauthbearer_token_refresh_cb, [:pointer, :string, :pointer], :void - attach_function :rd_kafka_conf_set_oauthbearer_token_refresh_cb, [:pointer, :oauthbearer_token_refresh_cb], :void - attach_function :rd_kafka_oauthbearer_set_token, [:pointer, :string, :int64, :pointer, :pointer, :int, :pointer, :int], :int - attach_function :rd_kafka_oauthbearer_set_token_failure, [:pointer, :string], :int + # Log queue attach_function :rd_kafka_set_log_queue, [:pointer, :pointer], :void attach_function :rd_kafka_queue_get_main, [:pointer], :pointer - # Per topic configs - attach_function :rd_kafka_topic_conf_new, [], :pointer - attach_function :rd_kafka_topic_conf_set, [:pointer, :string, :string, :pointer, :int], :kafka_config_response 
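The removed per-topic config bindings (rd_kafka_topic_conf_new / rd_kafka_topic_conf_set) follow the same set-and-check pattern as the global config functions. A minimal sketch, assuming the kafka_config_response enum declared earlier in this file maps success to :config_ok, and using an illustrative property name and value:

require "rdkafka"

# Allocate a per-topic config and set one property on it, collecting any
# error message librdkafka writes into the buffer.
error_buffer = FFI::MemoryPointer.from_string(" " * 256)
topic_conf = Rdkafka::Bindings.rd_kafka_topic_conf_new

result = Rdkafka::Bindings.rd_kafka_topic_conf_set(
  topic_conf,
  "acks",
  "all",
  error_buffer,
  256
)

raise Rdkafka::Config::ConfigError.new(error_buffer.read_string) unless result == :config_ok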
LogCallback = FFI::Function.new( :void, [:pointer, :int, :string, :string] ) do |_client_ptr, level, _level_string, line| severity = case level - when 0, 1, 2 + when 0 || 1 || 2 Logger::FATAL when 3 Logger::ERROR when 4 Logger::WARN - when 5, 6 + when 5 || 6 Logger::INFO when 7 Logger::DEBUG else Logger::UNKNOWN end - - Rdkafka::Config.ensure_log_thread Rdkafka::Config.log_queue << [severity, "rdkafka: #{line}"] end @@ -211,37 +153,10 @@ class NativeErrorDesc < FFI::Struct ) do |_client_prr, err_code, reason, _opaque| if Rdkafka::Config.error_callback error = Rdkafka::RdkafkaError.new(err_code, broker_message: reason) - error.set_backtrace(caller) Rdkafka::Config.error_callback.call(error) end end - # The OAuth callback is currently global and contextless. - # This means that the callback will be called for all instances, and the callback must be able to determine to which instance it is associated. - # The instance name will be provided in the callback, allowing the callback to reference the correct instance. - # - # An example of how to use the instance name in the callback is given below. - # The `refresh_token` is configured as the `oauthbearer_token_refresh_callback`. - # `instances` is a map of client names to client instances, maintained by the user. - # - # ``` - # def refresh_token(config, client_name) - # client = instances[client_name] - # client.oauthbearer_set_token( - # token: 'new-token-value', - # lifetime_ms: token-lifetime-ms, - # principal_name: 'principal-name' - # ) - # end - # ``` - OAuthbearerTokenRefreshCallback = FFI::Function.new( - :void, [:pointer, :string, :pointer] - ) do |client_ptr, config, _opaque| - if Rdkafka::Config.oauthbearer_token_refresh_callback - Rdkafka::Config.oauthbearer_token_refresh_callback.call(config, Rdkafka::Bindings.rd_kafka_name(client_ptr)) - end - end - # Handle enum :kafka_type, [ @@ -250,33 +165,24 @@ class NativeErrorDesc < FFI::Struct ] attach_function :rd_kafka_new, [:kafka_type, :pointer, :pointer, :int], :pointer - attach_function :rd_kafka_destroy, [:pointer], :void # Consumer - attach_function :rd_kafka_subscribe, [:pointer, :pointer], :int, blocking: true - attach_function :rd_kafka_unsubscribe, [:pointer], :int, blocking: true - attach_function :rd_kafka_subscription, [:pointer, :pointer], :int, blocking: true - attach_function :rd_kafka_assign, [:pointer, :pointer], :int, blocking: true - attach_function :rd_kafka_incremental_assign, [:pointer, :pointer], :int, blocking: true - attach_function :rd_kafka_incremental_unassign, [:pointer, :pointer], :int, blocking: true - attach_function :rd_kafka_assignment, [:pointer, :pointer], :int, blocking: true - attach_function :rd_kafka_assignment_lost, [:pointer], :int, blocking: true - attach_function :rd_kafka_committed, [:pointer, :pointer, :int], :int, blocking: true + attach_function :rd_kafka_subscribe, [:pointer, :pointer], :int + attach_function :rd_kafka_unsubscribe, [:pointer], :int + attach_function :rd_kafka_subscription, [:pointer, :pointer], :int + attach_function :rd_kafka_assign, [:pointer, :pointer], :int + attach_function :rd_kafka_assignment, [:pointer, :pointer], :int + attach_function :rd_kafka_committed, [:pointer, :pointer, :int], :int attach_function :rd_kafka_commit, [:pointer, :pointer, :bool], :int, blocking: true - attach_function :rd_kafka_poll_set_consumer, [:pointer], :void, blocking: true + attach_function :rd_kafka_poll_set_consumer, [:pointer], :void attach_function :rd_kafka_consumer_poll, [:pointer, :int], :pointer, blocking: true attach_function 
:rd_kafka_consumer_close, [:pointer], :void, blocking: true - attach_function :rd_kafka_offsets_store, [:pointer, :pointer], :int, blocking: true - attach_function :rd_kafka_pause_partitions, [:pointer, :pointer], :int, blocking: true - attach_function :rd_kafka_resume_partitions, [:pointer, :pointer], :int, blocking: true - attach_function :rd_kafka_seek, [:pointer, :int32, :int64, :int], :int, blocking: true - attach_function :rd_kafka_offsets_for_times, [:pointer, :pointer, :int], :int, blocking: true - attach_function :rd_kafka_position, [:pointer, :pointer], :int, blocking: true - # those two are used for eos support - attach_function :rd_kafka_consumer_group_metadata, [:pointer], :pointer, blocking: true - attach_function :rd_kafka_consumer_group_metadata_destroy, [:pointer], :void, blocking: true + attach_function :rd_kafka_offset_store, [:pointer, :int32, :int64], :int + attach_function :rd_kafka_pause_partitions, [:pointer, :pointer], :int + attach_function :rd_kafka_resume_partitions, [:pointer, :pointer], :int + attach_function :rd_kafka_seek, [:pointer, :int32, :int64, :int], :int # Headers attach_function :rd_kafka_header_get_all, [:pointer, :size_t, :pointer, :pointer, SizePtr], :int @@ -285,36 +191,30 @@ class NativeErrorDesc < FFI::Struct # Rebalance callback :rebalance_cb_function, [:pointer, :int, :pointer, :pointer], :void - attach_function :rd_kafka_conf_set_rebalance_cb, [:pointer, :rebalance_cb_function], :void, blocking: true + attach_function :rd_kafka_conf_set_rebalance_cb, [:pointer, :rebalance_cb_function], :void RebalanceCallback = FFI::Function.new( :void, [:pointer, :int, :pointer, :pointer] ) do |client_ptr, code, partitions_ptr, opaque_ptr| case code when RD_KAFKA_RESP_ERR__ASSIGN_PARTITIONS - if Rdkafka::Bindings.rd_kafka_rebalance_protocol(client_ptr) == "COOPERATIVE" - Rdkafka::Bindings.rd_kafka_incremental_assign(client_ptr, partitions_ptr) - else - Rdkafka::Bindings.rd_kafka_assign(client_ptr, partitions_ptr) - end + Rdkafka::Bindings.rd_kafka_assign(client_ptr, partitions_ptr) else # RD_KAFKA_RESP_ERR__REVOKE_PARTITIONS or errors - if Rdkafka::Bindings.rd_kafka_rebalance_protocol(client_ptr) == "COOPERATIVE" - Rdkafka::Bindings.rd_kafka_incremental_unassign(client_ptr, partitions_ptr) - else - Rdkafka::Bindings.rd_kafka_assign(client_ptr, FFI::Pointer::NULL) - end + Rdkafka::Bindings.rd_kafka_assign(client_ptr, FFI::Pointer::NULL) end opaque = Rdkafka::Config.opaques[opaque_ptr.to_i] return unless opaque tpl = Rdkafka::Consumer::TopicPartitionList.from_native_tpl(partitions_ptr).freeze + consumer = Rdkafka::Consumer.new(client_ptr) + begin case code when RD_KAFKA_RESP_ERR__ASSIGN_PARTITIONS - opaque.call_on_partitions_assigned(tpl) + opaque.call_on_partitions_assigned(consumer, tpl) when RD_KAFKA_RESP_ERR__REVOKE_PARTITIONS - opaque.call_on_partitions_revoked(tpl) + opaque.call_on_partitions_revoked(consumer, tpl) end rescue Exception => err Rdkafka::Config.logger.error("Unhandled exception: #{err.class} - #{err.message}") @@ -338,32 +238,22 @@ class NativeErrorDesc < FFI::Struct RD_KAFKA_VTYPE_TIMESTAMP = 8 RD_KAFKA_VTYPE_HEADER = 9 RD_KAFKA_VTYPE_HEADERS = 10 - RD_KAFKA_PURGE_F_QUEUE = 1 - RD_KAFKA_PURGE_F_INFLIGHT = 2 RD_KAFKA_MSG_F_COPY = 0x2 - attach_function :rd_kafka_producev, [:pointer, :varargs], :int, blocking: true - attach_function :rd_kafka_purge, [:pointer, :int], :int, blocking: true + attach_function :rd_kafka_producev, [:pointer, :varargs], :int callback :delivery_cb, [:pointer, :pointer, :pointer], :void attach_function 
:rd_kafka_conf_set_dr_msg_cb, [:pointer, :delivery_cb], :void # Partitioner - PARTITIONERS = %w(random consistent consistent_random murmur2 murmur2_random fnv1a fnv1a_random).each_with_object({}) do |name, hsh| - method_name = "rd_kafka_msg_partitioner_#{name}".to_sym - attach_function method_name, [:pointer, :pointer, :size_t, :int32, :pointer, :pointer], :int32 - hsh[name] = method_name - end + attach_function :rd_kafka_msg_partitioner_consistent_random, [:pointer, :pointer, :size_t, :int32, :pointer, :pointer], :int32 - def self.partitioner(str, partition_count, partitioner_name = "consistent_random") + def self.partitioner(str, partition_count) # Return RD_KAFKA_PARTITION_UA(unassigned partition) when partition count is nil/zero. return -1 unless partition_count&.nonzero? - str_ptr = str.empty? ? FFI::MemoryPointer::NULL : FFI::MemoryPointer.from_string(str) - method_name = PARTITIONERS.fetch(partitioner_name) do - raise Rdkafka::Config::ConfigError.new("Unknown partitioner: #{partitioner_name}") - end - public_send(method_name, nil, str_ptr, str.size > 0 ? str.size : 1, partition_count, nil, nil) + str_ptr = FFI::MemoryPointer.from_string(str) + rd_kafka_msg_partitioner_consistent_random(nil, str_ptr, str.size, partition_count, nil, nil) end # Create Topics @@ -371,44 +261,23 @@ def self.partitioner(str, partition_count, partitioner_name = "consistent_random RD_KAFKA_ADMIN_OP_CREATETOPICS = 1 # rd_kafka_admin_op_t RD_KAFKA_EVENT_CREATETOPICS_RESULT = 100 # rd_kafka_event_type_t - attach_function :rd_kafka_CreateTopics, [:pointer, :pointer, :size_t, :pointer, :pointer], :void, blocking: true - attach_function :rd_kafka_NewTopic_new, [:pointer, :size_t, :size_t, :pointer, :size_t], :pointer, blocking: true - attach_function :rd_kafka_NewTopic_set_config, [:pointer, :string, :string], :int32, blocking: true - attach_function :rd_kafka_NewTopic_destroy, [:pointer], :void, blocking: true - attach_function :rd_kafka_event_CreateTopics_result, [:pointer], :pointer, blocking: true - attach_function :rd_kafka_CreateTopics_result_topics, [:pointer, :pointer], :pointer, blocking: true + attach_function :rd_kafka_CreateTopics, [:pointer, :pointer, :size_t, :pointer, :pointer], :void + attach_function :rd_kafka_NewTopic_new, [:pointer, :size_t, :size_t, :pointer, :size_t], :pointer + attach_function :rd_kafka_NewTopic_set_config, [:pointer, :string, :string], :int32 + attach_function :rd_kafka_NewTopic_destroy, [:pointer], :void + attach_function :rd_kafka_event_CreateTopics_result, [:pointer], :pointer + attach_function :rd_kafka_CreateTopics_result_topics, [:pointer, :pointer], :pointer # Delete Topics RD_KAFKA_ADMIN_OP_DELETETOPICS = 2 # rd_kafka_admin_op_t RD_KAFKA_EVENT_DELETETOPICS_RESULT = 101 # rd_kafka_event_type_t - attach_function :rd_kafka_DeleteTopics, [:pointer, :pointer, :size_t, :pointer, :pointer], :int32, blocking: true - attach_function :rd_kafka_DeleteTopic_new, [:pointer], :pointer, blocking: true - attach_function :rd_kafka_DeleteTopic_destroy, [:pointer], :void, blocking: true - attach_function :rd_kafka_event_DeleteTopics_result, [:pointer], :pointer, blocking: true - attach_function :rd_kafka_DeleteTopics_result_topics, [:pointer, :pointer], :pointer, blocking: true - - # Create partitions - RD_KAFKA_ADMIN_OP_CREATEPARTITIONS = 3 - RD_KAFKA_ADMIN_OP_CREATEPARTITIONS_RESULT = 102 - - attach_function :rd_kafka_CreatePartitions, [:pointer, :pointer, :size_t, :pointer, :pointer], :void - attach_function :rd_kafka_NewPartitions_new, %i[pointer size_t pointer size_t], :pointer - 
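These create-partitions bindings are chained together by Admin#create_partitions, shown earlier in this patch. A rough sketch of the first step, with an illustrative topic name and target partition count:

require "rdkafka"

error_buffer = FFI::MemoryPointer.from_string(" " * 256)

# Build the NewPartitions request for a topic; NULL means librdkafka rejected it
# and wrote the reason into the error buffer.
new_partitions_ptr = Rdkafka::Bindings.rd_kafka_NewPartitions_new(
  FFI::MemoryPointer.from_string("example-topic"),
  4, # desired total partition count
  error_buffer,
  256
)

raise Rdkafka::Config::ConfigError.new(error_buffer.read_string) if new_partitions_ptr.null?

topics_array_ptr = FFI::MemoryPointer.new(:pointer)
topics_array_ptr.write_array_of_pointer([new_partitions_ptr])
# The pointer array is then handed to rd_kafka_CreatePartitions together with
# admin options and the background queue, exactly as in Admin#create_partitions.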
attach_function :rd_kafka_NewPartitions_destroy, [:pointer], :void - attach_function :rd_kafka_event_CreatePartitions_result, [:pointer], :pointer - attach_function :rd_kafka_CreatePartitions_result_topics, [:pointer, :pointer], :pointer - - # Delete Group - - RD_KAFKA_ADMIN_OP_DELETEGROUPS = 7 # rd_kafka_admin_op_t - RD_KAFKA_EVENT_DELETEGROUPS_RESULT = 106 # rd_kafka_event_type_t - - attach_function :rd_kafka_DeleteGroups, [:pointer, :pointer, :size_t, :pointer, :pointer], :void, blocking: true - attach_function :rd_kafka_DeleteGroup_new, [:pointer], :pointer, blocking: true - attach_function :rd_kafka_DeleteGroup_destroy, [:pointer], :void, blocking: true - attach_function :rd_kafka_event_DeleteGroups_result, [:pointer], :pointer, blocking: true # rd_kafka_event_t* => rd_kafka_DeleteGroups_result_t* - attach_function :rd_kafka_DeleteGroups_result_groups, [:pointer, :pointer], :pointer, blocking: true # rd_kafka_DeleteGroups_result_t*, size_t* => rd_kafka_group_result_t** + attach_function :rd_kafka_DeleteTopics, [:pointer, :pointer, :size_t, :pointer, :pointer], :int32 + attach_function :rd_kafka_DeleteTopic_new, [:pointer], :pointer + attach_function :rd_kafka_DeleteTopic_destroy, [:pointer], :void + attach_function :rd_kafka_event_DeleteTopics_result, [:pointer], :pointer + attach_function :rd_kafka_DeleteTopics_result_topics, [:pointer, :pointer], :pointer # Background Queue and Callback @@ -432,103 +301,5 @@ def self.partitioner(str, partition_count, partitioner_name = "consistent_random attach_function :rd_kafka_topic_result_error, [:pointer], :int32 attach_function :rd_kafka_topic_result_error_string, [:pointer], :pointer attach_function :rd_kafka_topic_result_name, [:pointer], :pointer - - # Create Acls - - RD_KAFKA_ADMIN_OP_CREATEACLS = 9 - RD_KAFKA_EVENT_CREATEACLS_RESULT = 1024 - - attach_function :rd_kafka_CreateAcls, [:pointer, :pointer, :size_t, :pointer, :pointer], :void - attach_function :rd_kafka_event_CreateAcls_result, [:pointer], :pointer - attach_function :rd_kafka_CreateAcls_result_acls, [:pointer, :pointer], :pointer - - # Delete Acls - - RD_KAFKA_ADMIN_OP_DELETEACLS = 11 - RD_KAFKA_EVENT_DELETEACLS_RESULT = 4096 - - attach_function :rd_kafka_DeleteAcls, [:pointer, :pointer, :size_t, :pointer, :pointer], :void - attach_function :rd_kafka_event_DeleteAcls_result, [:pointer], :pointer - attach_function :rd_kafka_DeleteAcls_result_responses, [:pointer, :pointer], :pointer - attach_function :rd_kafka_DeleteAcls_result_response_error, [:pointer], :pointer - attach_function :rd_kafka_DeleteAcls_result_response_matching_acls, [:pointer, :pointer], :pointer - - # Describe Acls - - RD_KAFKA_ADMIN_OP_DESCRIBEACLS = 10 - RD_KAFKA_EVENT_DESCRIBEACLS_RESULT = 2048 - - attach_function :rd_kafka_DescribeAcls, [:pointer, :pointer, :pointer, :pointer], :void - attach_function :rd_kafka_event_DescribeAcls_result, [:pointer], :pointer - attach_function :rd_kafka_DescribeAcls_result_acls, [:pointer, :pointer], :pointer - - # Acl Bindings - - attach_function :rd_kafka_AclBinding_restype, [:pointer], :int32 - attach_function :rd_kafka_AclBinding_name, [:pointer], :pointer - attach_function :rd_kafka_AclBinding_resource_pattern_type, [:pointer], :int32 - attach_function :rd_kafka_AclBinding_principal, [:pointer], :pointer - attach_function :rd_kafka_AclBinding_host, [:pointer], :pointer - attach_function :rd_kafka_AclBinding_operation, [:pointer], :int32 - attach_function :rd_kafka_AclBinding_permission_type, [:pointer], :int32 - attach_function :rd_kafka_AclBinding_new, [:int32, 
:pointer, :int32, :pointer, :pointer, :int32, :int32, :pointer, :size_t ], :pointer - attach_function :rd_kafka_AclBindingFilter_new, [:int32, :pointer, :int32, :pointer, :pointer, :int32, :int32, :pointer, :size_t ], :pointer - attach_function :rd_kafka_AclBinding_destroy, [:pointer], :void - - # rd_kafka_ResourceType_t - https://github.com/confluentinc/librdkafka/blob/292d2a66b9921b783f08147807992e603c7af059/src/rdkafka.h#L7307 - - RD_KAFKA_RESOURCE_ANY = 1 - RD_KAFKA_RESOURCE_TOPIC = 2 - RD_KAFKA_RESOURCE_GROUP = 3 - RD_KAFKA_RESOURCE_BROKER = 4 - - # rd_kafka_ResourcePatternType_t - https://github.com/confluentinc/librdkafka/blob/292d2a66b9921b783f08147807992e603c7af059/src/rdkafka.h#L7320 - - RD_KAFKA_RESOURCE_PATTERN_ANY = 1 - RD_KAFKA_RESOURCE_PATTERN_MATCH = 2 - RD_KAFKA_RESOURCE_PATTERN_LITERAL = 3 - RD_KAFKA_RESOURCE_PATTERN_PREFIXED = 4 - - # rd_kafka_AclOperation_t - https://github.com/confluentinc/librdkafka/blob/292d2a66b9921b783f08147807992e603c7af059/src/rdkafka.h#L8403 - - RD_KAFKA_ACL_OPERATION_ANY = 1 - RD_KAFKA_ACL_OPERATION_ALL = 2 - RD_KAFKA_ACL_OPERATION_READ = 3 - RD_KAFKA_ACL_OPERATION_WRITE = 4 - RD_KAFKA_ACL_OPERATION_CREATE = 5 - RD_KAFKA_ACL_OPERATION_DELETE = 6 - RD_KAFKA_ACL_OPERATION_ALTER = 7 - RD_KAFKA_ACL_OPERATION_DESCRIBE = 8 - RD_KAFKA_ACL_OPERATION_CLUSTER_ACTION = 9 - RD_KAFKA_ACL_OPERATION_DESCRIBE_CONFIGS = 10 - RD_KAFKA_ACL_OPERATION_ALTER_CONFIGS = 11 - RD_KAFKA_ACL_OPERATION_IDEMPOTENT_WRITE = 12 - - # rd_kafka_AclPermissionType_t - https://github.com/confluentinc/librdkafka/blob/292d2a66b9921b783f08147807992e603c7af059/src/rdkafka.h#L8435 - - RD_KAFKA_ACL_PERMISSION_TYPE_ANY = 1 - RD_KAFKA_ACL_PERMISSION_TYPE_DENY = 2 - RD_KAFKA_ACL_PERMISSION_TYPE_ALLOW = 3 - - # Extracting error details from Acl results - attach_function :rd_kafka_acl_result_error, [:pointer], :pointer - attach_function :rd_kafka_error_code, [:pointer], :int32 - attach_function :rd_kafka_error_string, [:pointer], :pointer - attach_function :rd_kafka_event_error, [:pointer], :int32 - attach_function :rd_kafka_event_error_string, [:pointer], :pointer - attach_function :rd_kafka_AclBinding_error, [:pointer], :pointer - - - # Extracting data from group results - class NativeError < FFI::Struct # rd_kafka_error_t - layout :code, :int32, - :errstr, :pointer, - :fatal, :u_int8_t, - :retriable, :u_int8_t, - :txn_requires_abort, :u_int8_t - end - - attach_function :rd_kafka_group_result_error, [:pointer], NativeError.by_ref # rd_kafka_group_result_t* => rd_kafka_error_t* - attach_function :rd_kafka_group_result_name, [:pointer], :pointer end end diff --git a/lib/rdkafka/callbacks.rb b/lib/rdkafka/callbacks.rb index a1034418..81a535b1 100644 --- a/lib/rdkafka/callbacks.rb +++ b/lib/rdkafka/callbacks.rb @@ -1,5 +1,3 @@ -# frozen_string_literal: true - module Rdkafka module Callbacks @@ -23,132 +21,6 @@ def self.create_topic_results_from_array(count, array_pointer) end end - class GroupResult - attr_reader :result_error, :error_string, :result_name - def initialize(group_result_pointer) - native_error = Rdkafka::Bindings.rd_kafka_group_result_error(group_result_pointer) - - if native_error.null? 
- @result_error = 0 - @error_string = FFI::Pointer::NULL - else - @result_error = native_error[:code] - @error_string = native_error[:errstr] - end - - @result_name = Rdkafka::Bindings.rd_kafka_group_result_name(group_result_pointer) - end - def self.create_group_results_from_array(count, array_pointer) - (1..count).map do |index| - result_pointer = (array_pointer + (index - 1)).read_pointer - new(result_pointer) - end - end - end - - # Extracts attributes of rd_kafka_acl_result_t - # - # @private - class CreateAclResult - attr_reader :result_error, :error_string - - def initialize(acl_result_pointer) - rd_kafka_error_pointer = Bindings.rd_kafka_acl_result_error(acl_result_pointer) - @result_error = Rdkafka::Bindings.rd_kafka_error_code(rd_kafka_error_pointer) - @error_string = Rdkafka::Bindings.rd_kafka_error_string(rd_kafka_error_pointer) - end - - def self.create_acl_results_from_array(count, array_pointer) - (1..count).map do |index| - result_pointer = (array_pointer + (index - 1)).read_pointer - new(result_pointer) - end - end - end - - # Extracts attributes of rd_kafka_DeleteAcls_result_response_t - # - # @private - class DeleteAclResult - attr_reader :result_error, :error_string, :matching_acls, :matching_acls_count - - def initialize(acl_result_pointer) - @matching_acls=[] - rd_kafka_error_pointer = Rdkafka::Bindings.rd_kafka_DeleteAcls_result_response_error(acl_result_pointer) - @result_error = Rdkafka::Bindings.rd_kafka_error_code(rd_kafka_error_pointer) - @error_string = Rdkafka::Bindings.rd_kafka_error_string(rd_kafka_error_pointer) - if @result_error == 0 - # Get the number of matching acls - pointer_to_size_t = FFI::MemoryPointer.new(:int32) - @matching_acls = Rdkafka::Bindings.rd_kafka_DeleteAcls_result_response_matching_acls(acl_result_pointer, pointer_to_size_t) - @matching_acls_count = pointer_to_size_t.read_int - end - end - - def self.delete_acl_results_from_array(count, array_pointer) - (1..count).map do |index| - result_pointer = (array_pointer + (index - 1)).read_pointer - new(result_pointer) - end - end - end - - # Extracts attributes of rd_kafka_DeleteAcls_result_response_t - # - # @private - class DescribeAclResult - attr_reader :result_error, :error_string, :matching_acls, :matching_acls_count - - def initialize(event_ptr) - @matching_acls=[] - @result_error = Rdkafka::Bindings.rd_kafka_event_error(event_ptr) - @error_string = Rdkafka::Bindings.rd_kafka_event_error_string(event_ptr) - if @result_error == 0 - acl_describe_result = Rdkafka::Bindings.rd_kafka_event_DescribeAcls_result(event_ptr) - # Get the number of matching acls - pointer_to_size_t = FFI::MemoryPointer.new(:int32) - @matching_acls = Rdkafka::Bindings.rd_kafka_DescribeAcls_result_acls(acl_describe_result, pointer_to_size_t) - @matching_acls_count = pointer_to_size_t.read_int - end - end - end - - class DescribeConfigsResult - attr_reader :result_error, :error_string, :results, :results_count - - def initialize(event_ptr) - @results=[] - @result_error = Rdkafka::Bindings.rd_kafka_event_error(event_ptr) - @error_string = Rdkafka::Bindings.rd_kafka_event_error_string(event_ptr) - - if @result_error == 0 - configs_describe_result = Rdkafka::Bindings.rd_kafka_event_DescribeConfigs_result(event_ptr) - # Get the number of matching acls - pointer_to_size_t = FFI::MemoryPointer.new(:int32) - @results = Rdkafka::Bindings.rd_kafka_DescribeConfigs_result_resources(configs_describe_result, pointer_to_size_t) - @results_count = pointer_to_size_t.read_int - end - end - end - - class IncrementalAlterConfigsResult 
- attr_reader :result_error, :error_string, :results, :results_count - - def initialize(event_ptr) - @results=[] - @result_error = Rdkafka::Bindings.rd_kafka_event_error(event_ptr) - @error_string = Rdkafka::Bindings.rd_kafka_event_error_string(event_ptr) - - if @result_error == 0 - incremental_alter_result = Rdkafka::Bindings.rd_kafka_event_IncrementalAlterConfigs_result(event_ptr) - # Get the number of matching acls - pointer_to_size_t = FFI::MemoryPointer.new(:int32) - @results = Rdkafka::Bindings.rd_kafka_IncrementalAlterConfigs_result_resources(incremental_alter_result, pointer_to_size_t) - @results_count = pointer_to_size_t.read_int - end - end - end - # FFI Function used for Create Topic and Delete Topic callbacks BackgroundEventCallbackFunction = FFI::Function.new( :void, [:pointer, :pointer, :pointer] @@ -159,25 +31,11 @@ def initialize(event_ptr) # @private class BackgroundEventCallback def self.call(_, event_ptr, _) - case Rdkafka::Bindings.rd_kafka_event_type(event_ptr) - when Rdkafka::Bindings::RD_KAFKA_EVENT_CREATETOPICS_RESULT + event_type = Rdkafka::Bindings.rd_kafka_event_type(event_ptr) + if event_type == Rdkafka::Bindings::RD_KAFKA_EVENT_CREATETOPICS_RESULT process_create_topic(event_ptr) - when Rdkafka::Bindings::RD_KAFKA_EVENT_DESCRIBECONFIGS_RESULT - process_describe_configs(event_ptr) - when Rdkafka::Bindings::RD_KAFKA_EVENT_INCREMENTALALTERCONFIGS_RESULT - process_incremental_alter_configs(event_ptr) - when Rdkafka::Bindings::RD_KAFKA_EVENT_DELETETOPICS_RESULT + elsif event_type == Rdkafka::Bindings::RD_KAFKA_EVENT_DELETETOPICS_RESULT process_delete_topic(event_ptr) - when Rdkafka::Bindings::RD_KAFKA_ADMIN_OP_CREATEPARTITIONS_RESULT - process_create_partitions(event_ptr) - when Rdkafka::Bindings::RD_KAFKA_EVENT_CREATEACLS_RESULT - process_create_acl(event_ptr) - when Rdkafka::Bindings::RD_KAFKA_EVENT_DELETEACLS_RESULT - process_delete_acl(event_ptr) - when Rdkafka::Bindings::RD_KAFKA_EVENT_DESCRIBEACLS_RESULT - process_describe_acl(event_ptr) - when Rdkafka::Bindings::RD_KAFKA_EVENT_DELETEGROUPS_RESULT - process_delete_groups(event_ptr) end end @@ -196,62 +54,7 @@ def self.process_create_topic(event_ptr) create_topic_handle[:response] = create_topic_results[0].result_error create_topic_handle[:error_string] = create_topic_results[0].error_string create_topic_handle[:result_name] = create_topic_results[0].result_name - - create_topic_handle.unlock - end - end - - def self.process_describe_configs(event_ptr) - describe_configs = DescribeConfigsResult.new(event_ptr) - describe_configs_handle_ptr = Rdkafka::Bindings.rd_kafka_event_opaque(event_ptr) - - if describe_configs_handle = Rdkafka::Admin::DescribeConfigsHandle.remove(describe_configs_handle_ptr.address) - describe_configs_handle[:response] = describe_configs.result_error - describe_configs_handle[:response_string] = describe_configs.error_string - describe_configs_handle[:pending] = false - - if describe_configs.result_error == 0 - describe_configs_handle[:config_entries] = describe_configs.results - describe_configs_handle[:entry_count] = describe_configs.results_count - end - - describe_configs_handle.unlock - end - end - - def self.process_incremental_alter_configs(event_ptr) - incremental_alter = IncrementalAlterConfigsResult.new(event_ptr) - incremental_alter_handle_ptr = Rdkafka::Bindings.rd_kafka_event_opaque(event_ptr) - - if incremental_alter_handle = Rdkafka::Admin::IncrementalAlterConfigsHandle.remove(incremental_alter_handle_ptr.address) - incremental_alter_handle[:response] = 
incremental_alter.result_error - incremental_alter_handle[:response_string] = incremental_alter.error_string - incremental_alter_handle[:pending] = false - - if incremental_alter.result_error == 0 - incremental_alter_handle[:config_entries] = incremental_alter.results - incremental_alter_handle[:entry_count] = incremental_alter.results_count - end - - incremental_alter_handle.unlock - end - end - - def self.process_delete_groups(event_ptr) - delete_groups_result = Rdkafka::Bindings.rd_kafka_event_DeleteGroups_result(event_ptr) - - # Get the number of delete group results - pointer_to_size_t = FFI::MemoryPointer.new(:size_t) - delete_group_result_array = Rdkafka::Bindings.rd_kafka_DeleteGroups_result_groups(delete_groups_result, pointer_to_size_t) - delete_group_results = GroupResult.create_group_results_from_array(pointer_to_size_t.read_int, delete_group_result_array) # TODO fix this - delete_group_handle_ptr = Rdkafka::Bindings.rd_kafka_event_opaque(event_ptr) - - if (delete_group_handle = Rdkafka::Admin::DeleteGroupsHandle.remove(delete_group_handle_ptr.address)) - delete_group_handle[:response] = delete_group_results[0].result_error - delete_group_handle[:error_string] = delete_group_results[0].error_string - delete_group_handle[:result_name] = delete_group_results[0].result_name - - delete_group_handle.unlock + create_topic_handle[:pending] = false end end @@ -268,87 +71,13 @@ def self.process_delete_topic(event_ptr) delete_topic_handle[:response] = delete_topic_results[0].result_error delete_topic_handle[:error_string] = delete_topic_results[0].error_string delete_topic_handle[:result_name] = delete_topic_results[0].result_name - - delete_topic_handle.unlock - end - end - - def self.process_create_partitions(event_ptr) - create_partitionss_result = Rdkafka::Bindings.rd_kafka_event_CreatePartitions_result(event_ptr) - - # Get the number of create topic results - pointer_to_size_t = FFI::MemoryPointer.new(:int32) - create_partitions_result_array = Rdkafka::Bindings.rd_kafka_CreatePartitions_result_topics(create_partitionss_result, pointer_to_size_t) - create_partitions_results = TopicResult.create_topic_results_from_array(pointer_to_size_t.read_int, create_partitions_result_array) - create_partitions_handle_ptr = Rdkafka::Bindings.rd_kafka_event_opaque(event_ptr) - - if create_partitions_handle = Rdkafka::Admin::CreatePartitionsHandle.remove(create_partitions_handle_ptr.address) - create_partitions_handle[:response] = create_partitions_results[0].result_error - create_partitions_handle[:error_string] = create_partitions_results[0].error_string - create_partitions_handle[:result_name] = create_partitions_results[0].result_name - - create_partitions_handle.unlock - end - end - - def self.process_create_acl(event_ptr) - create_acls_result = Rdkafka::Bindings.rd_kafka_event_CreateAcls_result(event_ptr) - - # Get the number of acl results - pointer_to_size_t = FFI::MemoryPointer.new(:int32) - create_acl_result_array = Rdkafka::Bindings.rd_kafka_CreateAcls_result_acls(create_acls_result, pointer_to_size_t) - create_acl_results = CreateAclResult.create_acl_results_from_array(pointer_to_size_t.read_int, create_acl_result_array) - create_acl_handle_ptr = Rdkafka::Bindings.rd_kafka_event_opaque(event_ptr) - - if create_acl_handle = Rdkafka::Admin::CreateAclHandle.remove(create_acl_handle_ptr.address) - create_acl_handle[:response] = create_acl_results[0].result_error - create_acl_handle[:response_string] = create_acl_results[0].error_string - - create_acl_handle.unlock - end - end - - def 
self.process_delete_acl(event_ptr) - delete_acls_result = Rdkafka::Bindings.rd_kafka_event_DeleteAcls_result(event_ptr) - - # Get the number of acl results - pointer_to_size_t = FFI::MemoryPointer.new(:int32) - delete_acl_result_responses = Rdkafka::Bindings.rd_kafka_DeleteAcls_result_responses(delete_acls_result, pointer_to_size_t) - delete_acl_results = DeleteAclResult.delete_acl_results_from_array(pointer_to_size_t.read_int, delete_acl_result_responses) - delete_acl_handle_ptr = Rdkafka::Bindings.rd_kafka_event_opaque(event_ptr) - - if delete_acl_handle = Rdkafka::Admin::DeleteAclHandle.remove(delete_acl_handle_ptr.address) - delete_acl_handle[:response] = delete_acl_results[0].result_error - delete_acl_handle[:response_string] = delete_acl_results[0].error_string - - if delete_acl_results[0].result_error == 0 - delete_acl_handle[:matching_acls] = delete_acl_results[0].matching_acls - delete_acl_handle[:matching_acls_count] = delete_acl_results[0].matching_acls_count - end - - delete_acl_handle.unlock - end - end - - def self.process_describe_acl(event_ptr) - describe_acl = DescribeAclResult.new(event_ptr) - describe_acl_handle_ptr = Rdkafka::Bindings.rd_kafka_event_opaque(event_ptr) - - if describe_acl_handle = Rdkafka::Admin::DescribeAclHandle.remove(describe_acl_handle_ptr.address) - describe_acl_handle[:response] = describe_acl.result_error - describe_acl_handle[:response_string] = describe_acl.error_string - - if describe_acl.result_error == 0 - describe_acl_handle[:acls] = describe_acl.matching_acls - describe_acl_handle[:acls_count] = describe_acl.matching_acls_count - end - - describe_acl_handle.unlock + delete_topic_handle[:pending] = false end end end # FFI Function used for Message Delivery callbacks + DeliveryCallbackFunction = FFI::Function.new( :void, [:pointer, :pointer, :pointer] ) do |client_ptr, message_ptr, opaque_ptr| @@ -361,29 +90,15 @@ def self.call(_, message_ptr, opaque_ptr) message = Rdkafka::Bindings::Message.new(message_ptr) delivery_handle_ptr_address = message[:_private].address if delivery_handle = Rdkafka::Producer::DeliveryHandle.remove(delivery_handle_ptr_address) - topic_name = Rdkafka::Bindings.rd_kafka_topic_name(message[:rkt]) - # Update delivery handle delivery_handle[:response] = message[:err] delivery_handle[:partition] = message[:partition] delivery_handle[:offset] = message[:offset] - delivery_handle[:topic_name] = FFI::MemoryPointer.from_string(topic_name) - + delivery_handle[:pending] = false # Call delivery callback on opaque if opaque = Rdkafka::Config.opaques[opaque_ptr.to_i] - opaque.call_delivery_callback( - Rdkafka::Producer::DeliveryReport.new( - message[:partition], - message[:offset], - topic_name, - message[:err], - delivery_handle.label - ), - delivery_handle - ) + opaque.call_delivery_callback(Rdkafka::Producer::DeliveryReport.new(message[:partition], message[:offset], message[:err])) end - - delivery_handle.unlock end end end diff --git a/lib/rdkafka/config.rb b/lib/rdkafka/config.rb index 11120f2f..b6c87990 100644 --- a/lib/rdkafka/config.rb +++ b/lib/rdkafka/config.rb @@ -1,9 +1,9 @@ -# frozen_string_literal: true +require "logger" module Rdkafka # Configuration for a Kafka consumer or producer. You can create an instance and use # the consumer and producer methods to create a client. Documentation of the available - # configuration options is available on https://github.com/confluentinc/librdkafka/blob/master/CONFIGURATION.md. 
+ # configuration options is available on https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md. class Config # @private @@logger = Logger.new(STDOUT) @@ -12,16 +12,16 @@ class Config # @private @@error_callback = nil # @private - @@opaques = ObjectSpace::WeakMap.new + @@opaques = {} # @private @@log_queue = Queue.new - # We memoize thread on the first log flush - # This allows us also to restart logger thread on forks - @@log_thread = nil - # @private - @@log_mutex = Mutex.new - # @private - @@oauthbearer_token_refresh_callback = nil + + Thread.start do + loop do + severity, msg = @@log_queue.pop + @@logger.add(severity, msg) + end + end # Returns the current logger, by default this is a logger to stdout. # @@ -30,23 +30,6 @@ def self.logger @@logger end - # Makes sure that there is a thread for consuming logs - # We do not spawn thread immediately and we need to check if it operates to support forking - def self.ensure_log_thread - return if @@log_thread && @@log_thread.alive? - - @@log_mutex.synchronize do - # Restart if dead (fork, crash) - @@log_thread = nil if @@log_thread && !@@log_thread.alive? - - @@log_thread ||= Thread.start do - loop do - severity, msg = @@log_queue.pop - @@logger.add(severity, msg) - end - end - end - end # Returns a queue whose contents will be passed to the configured logger. Each entry # should follow the format [Logger::Severity, String]. The benefit over calling the @@ -64,18 +47,18 @@ def self.log_queue # @return [nil] def self.logger=(logger) raise NoLoggerError if logger.nil? - @@logger = logger + @@logger=logger end # Set a callback that will be called every time the underlying client emits statistics. # You can configure if and how often this happens using `statistics.interval.ms`. - # The callback is called with a hash that's documented here: https://github.com/confluentinc/librdkafka/blob/master/STATISTICS.md + # The callback is called with a hash that's documented here: https://github.com/edenhill/librdkafka/blob/master/STATISTICS.md # # @param callback [Proc, #call] The callback # # @return [nil] def self.statistics_callback=(callback) - raise TypeError.new("Callback has to be callable") unless callback.respond_to?(:call) || callback == nil + raise TypeError.new("Callback has to be callable") unless callback.respond_to?(:call) @@statistics_callback = callback end @@ -105,24 +88,6 @@ def self.error_callback @@error_callback end - # Sets the SASL/OAUTHBEARER token refresh callback. - # This callback will be triggered when it is time to refresh the client's OAUTHBEARER token - # - # @param callback [Proc, #call] The callback - # - # @return [nil] - def self.oauthbearer_token_refresh_callback=(callback) - raise TypeError.new("Callback has to be callable") unless callback.respond_to?(:call) || callback == nil - @@oauthbearer_token_refresh_callback = callback - end - - # Returns the current oauthbearer_token_refresh_callback callback, by default this is nil. - # - # @return [Proc, nil] - def self.oauthbearer_token_refresh_callback - @@oauthbearer_token_refresh_callback - end - # @private def self.opaques @@opaques @@ -148,7 +113,6 @@ def self.opaques def initialize(config_hash = {}) @config_hash = DEFAULT_CONFIG.merge(config_hash) @consumer_rebalance_listener = nil - @consumer_poll_set = true end # Set a config option. @@ -177,31 +141,13 @@ def consumer_rebalance_listener=(listener) @consumer_rebalance_listener = listener end - # Should we use a single queue for the underlying consumer and events. 
- # - # This is an advanced API that allows for more granular control of the polling process. - # When this value is set to `false` (`true` by defualt), there will be two queues that need to - # be polled: - # - main librdkafka queue for events - # - consumer queue with messages and rebalances - # - # It is recommended to use the defaults and only set it to `false` in advance multi-threaded - # and complex cases where granular events handling control is needed. - # - # @param poll_set [Boolean] - def consumer_poll_set=(poll_set) - @consumer_poll_set = poll_set - end - - # Creates a consumer with this configuration. - # - # @param native_kafka_auto_start [Boolean] should the native kafka operations be started - # automatically. Defaults to true. Set to false only when doing complex initialization. - # @return [Consumer] The created consumer + # Create a consumer with this configuration. # # @raise [ConfigError] When the configuration contains invalid options # @raise [ClientCreationError] When the native client cannot be created - def consumer(native_kafka_auto_start: true) + # + # @return [Consumer] The created consumer + def consumer opaque = Opaque.new config = native_config(opaque) @@ -210,32 +156,22 @@ def consumer(native_kafka_auto_start: true) Rdkafka::Bindings.rd_kafka_conf_set_rebalance_cb(config, Rdkafka::Bindings::RebalanceCallback) end - # Create native client kafka = native_kafka(config, :rd_kafka_consumer) - # Redirect the main queue to the consumer queue - Rdkafka::Bindings.rd_kafka_poll_set_consumer(kafka) if @consumer_poll_set + # Redirect the main queue to the consumer + Rdkafka::Bindings.rd_kafka_poll_set_consumer(kafka) # Return consumer with Kafka client - Rdkafka::Consumer.new( - Rdkafka::NativeKafka.new( - kafka, - run_polling_thread: false, - opaque: opaque, - auto_start: native_kafka_auto_start - ) - ) + Rdkafka::Consumer.new(kafka) end # Create a producer with this configuration. # - # @param native_kafka_auto_start [Boolean] should the native kafka operations be started - # automatically. Defaults to true. Set to false only when doing complex initialization. - # @return [Producer] The created producer - # # @raise [ConfigError] When the configuration contains invalid options # @raise [ClientCreationError] When the native client cannot be created - def producer(native_kafka_auto_start: true) + # + # @return [Producer] The created producer + def producer # Create opaque opaque = Opaque.new # Create Kafka config @@ -243,46 +179,22 @@ def producer(native_kafka_auto_start: true) # Set callback to receive delivery reports on config Rdkafka::Bindings.rd_kafka_conf_set_dr_msg_cb(config, Rdkafka::Callbacks::DeliveryCallbackFunction) # Return producer with Kafka client - partitioner_name = self[:partitioner] || self["partitioner"] - - kafka = native_kafka(config, :rd_kafka_producer) - - Rdkafka::Producer.new( - Rdkafka::NativeKafka.new( - kafka, - run_polling_thread: true, - opaque: opaque, - auto_start: native_kafka_auto_start - ), - partitioner_name - ).tap do |producer| + Rdkafka::Producer.new(native_kafka(config, :rd_kafka_producer)).tap do |producer| opaque.producer = producer end end - # Creates an admin instance with this configuration. - # - # @param native_kafka_auto_start [Boolean] should the native kafka operations be started - # automatically. Defaults to true. Set to false only when doing complex initialization. - # @return [Admin] The created admin instance + # Create an admin instance with this configuration. 
# # @raise [ConfigError] When the configuration contains invalid options # @raise [ClientCreationError] When the native client cannot be created - def admin(native_kafka_auto_start: true) + # + # @return [Admin] The created admin instance + def admin opaque = Opaque.new config = native_config(opaque) Rdkafka::Bindings.rd_kafka_conf_set_background_event_cb(config, Rdkafka::Callbacks::BackgroundEventCallbackFunction) - - kafka = native_kafka(config, :rd_kafka_producer) - - Rdkafka::Admin.new( - Rdkafka::NativeKafka.new( - kafka, - run_polling_thread: true, - opaque: opaque, - auto_start: native_kafka_auto_start - ) - ) + Rdkafka::Admin.new(native_kafka(config, :rd_kafka_producer)) end # Error that is returned by the underlying rdkafka error if an invalid configuration option is present. @@ -298,7 +210,7 @@ class NoLoggerError < RuntimeError; end # This method is only intended to be used to create a client, # using it in another way will leak memory. - def native_config(opaque = nil) + def native_config(opaque=nil) Rdkafka::Bindings.rd_kafka_conf_new.tap do |config| # Create config @config_hash.merge(REQUIRED_CONFIG).each do |key, value| @@ -334,9 +246,6 @@ def native_config(opaque = nil) # Set error callback Rdkafka::Bindings.rd_kafka_conf_set_error_cb(config, Rdkafka::Bindings::ErrorCallback) - - # Set oauth callback - Rdkafka::Bindings.rd_kafka_conf_set_oauthbearer_token_refresh_cb(config, Rdkafka::Bindings::OAuthbearerTokenRefreshCallback) end end @@ -369,22 +278,22 @@ class Opaque attr_accessor :producer attr_accessor :consumer_rebalance_listener - def call_delivery_callback(delivery_report, delivery_handle) - producer.call_delivery_callback(delivery_report, delivery_handle) if producer + def call_delivery_callback(delivery_handle) + producer.call_delivery_callback(delivery_handle) if producer end - def call_on_partitions_assigned(list) + def call_on_partitions_assigned(consumer, list) return unless consumer_rebalance_listener return unless consumer_rebalance_listener.respond_to?(:on_partitions_assigned) - consumer_rebalance_listener.on_partitions_assigned(list) + consumer_rebalance_listener.on_partitions_assigned(consumer, list) end - def call_on_partitions_revoked(list) + def call_on_partitions_revoked(consumer, list) return unless consumer_rebalance_listener return unless consumer_rebalance_listener.respond_to?(:on_partitions_revoked) - consumer_rebalance_listener.on_partitions_revoked(list) + consumer_rebalance_listener.on_partitions_revoked(consumer, list) end end end diff --git a/lib/rdkafka/consumer.rb b/lib/rdkafka/consumer.rb index 8c62c348..fccc57cf 100644 --- a/lib/rdkafka/consumer.rb +++ b/lib/rdkafka/consumer.rb @@ -1,5 +1,3 @@ -# frozen_string_literal: true - module Rdkafka # A consumer of Kafka messages. It uses the high-level consumer approach where the Kafka # brokers automatically assign partitions and load balance partitions over consumers that @@ -12,54 +10,31 @@ module Rdkafka # `each_slice` to consume batches of messages. 
class Consumer include Enumerable - include Helpers::Time - include Helpers::OAuth # @private def initialize(native_kafka) @native_kafka = native_kafka - end - - # Starts the native Kafka polling thread and kicks off the init polling - # @note Not needed to run unless explicit start was disabled - def start - @native_kafka.start - end - - # @return [String] consumer name - def name - @name ||= @native_kafka.with_inner do |inner| - ::Rdkafka::Bindings.rd_kafka_name(inner) - end - end - - def finalizer - ->(_) { close } + @closing = false end # Close this consumer # @return [nil] def close - return if closed? - ObjectSpace.undefine_finalizer(self) - - @native_kafka.synchronize do |inner| - Rdkafka::Bindings.rd_kafka_consumer_close(inner) - end - - @native_kafka.close - end + return unless @native_kafka - # Whether this consumer has closed - def closed? - @native_kafka.closed? + @closing = true + Rdkafka::Bindings.rd_kafka_consumer_close(@native_kafka) + Rdkafka::Bindings.rd_kafka_destroy(@native_kafka) + @native_kafka = nil end - # Subscribes to one or more topics letting Kafka handle partition assignments. + # Subscribe to one or more topics letting Kafka handle partition assignments. # # @param topics [Array] One or more topic names - # @return [nil] + # # @raise [RdkafkaError] When subscribing fails + # + # @return [nil] def subscribe(*topics) closed_consumer_check(__method__) @@ -71,9 +46,7 @@ def subscribe(*topics) end # Subscribe to topic partition list and check this was successful - response = @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_subscribe(inner, tpl) - end + response = Rdkafka::Bindings.rd_kafka_subscribe(@native_kafka, tpl) if response != 0 raise Rdkafka::RdkafkaError.new(response, "Error subscribing to '#{topics.join(', ')}'") end @@ -83,14 +56,13 @@ def subscribe(*topics) # Unsubscribe from all subscribed topics. # - # @return [nil] # @raise [RdkafkaError] When unsubscribing fails + # + # @return [nil] def unsubscribe closed_consumer_check(__method__) - response = @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_unsubscribe(inner) - end + response = Rdkafka::Bindings.rd_kafka_unsubscribe(@native_kafka) if response != 0 raise Rdkafka::RdkafkaError.new(response) end @@ -99,8 +71,10 @@ def unsubscribe # Pause producing or consumption for the provided list of partitions # # @param list [TopicPartitionList] The topic with partitions to pause - # @return [nil] + # # @raise [RdkafkaTopicPartitionListError] When pausing subscription fails. + # + # @return [nil] def pause(list) closed_consumer_check(__method__) @@ -111,9 +85,7 @@ def pause(list) tpl = list.to_native_tpl begin - response = @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_pause_partitions(inner, tpl) - end + response = Rdkafka::Bindings.rd_kafka_pause_partitions(@native_kafka, tpl) if response != 0 list = TopicPartitionList.from_native_tpl(tpl) @@ -124,11 +96,13 @@ def pause(list) end end - # Resumes producing consumption for the provided list of partitions + # Resume producing consumption for the provided list of partitions # # @param list [TopicPartitionList] The topic with partitions to pause - # @return [nil] + # # @raise [RdkafkaError] When resume subscription fails. 
+ # + # @return [nil] def resume(list) closed_consumer_check(__method__) @@ -139,9 +113,7 @@ def resume(list) tpl = list.to_native_tpl begin - response = @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_resume_partitions(inner, tpl) - end + response = Rdkafka::Bindings.rd_kafka_resume_partitions(@native_kafka, tpl) if response != 0 raise Rdkafka::RdkafkaError.new(response, "Error resume '#{list.to_h}'") end @@ -150,17 +122,16 @@ def resume(list) end end - # Returns the current subscription to topics and partitions + # Return the current subscription to topics and partitions # - # @return [TopicPartitionList] # @raise [RdkafkaError] When getting the subscription fails. + # + # @return [TopicPartitionList] def subscription closed_consumer_check(__method__) ptr = FFI::MemoryPointer.new(:pointer) - response = @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_subscription(inner, ptr) - end + response = Rdkafka::Bindings.rd_kafka_subscription(@native_kafka, ptr) if response != 0 raise Rdkafka::RdkafkaError.new(response) @@ -178,6 +149,7 @@ def subscription # Atomic assignment of partitions to consume # # @param list [TopicPartitionList] The topic with partitions to assign + # # @raise [RdkafkaError] When assigning fails def assign(list) closed_consumer_check(__method__) @@ -189,9 +161,7 @@ def assign(list) tpl = list.to_native_tpl begin - response = @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_assign(inner, tpl) - end + response = Rdkafka::Bindings.rd_kafka_assign(@native_kafka, tpl) if response != 0 raise Rdkafka::RdkafkaError.new(response, "Error assigning '#{list.to_h}'") end @@ -202,15 +172,14 @@ def assign(list) # Returns the current partition assignment. # - # @return [TopicPartitionList] # @raise [RdkafkaError] When getting the assignment fails. + # + # @return [TopicPartitionList] def assignment closed_consumer_check(__method__) ptr = FFI::MemoryPointer.new(:pointer) - response = @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_assignment(inner, ptr) - end + response = Rdkafka::Bindings.rd_kafka_assignment(@native_kafka, ptr) if response != 0 raise Rdkafka::RdkafkaError.new(response) end @@ -228,25 +197,16 @@ def assignment ptr.free unless ptr.nil? end - # @return [Boolean] true if our current assignment has been lost involuntarily. - def assignment_lost? - closed_consumer_check(__method__) - - @native_kafka.with_inner do |inner| - !Rdkafka::Bindings.rd_kafka_assignment_lost(inner).zero? - end - end - # Return the current committed offset per partition for this consumer group. - # The offset field of each requested partition will either be set to stored offset or to -1001 - # in case there was no stored offset for that partition. + # The offset field of each requested partition will either be set to stored offset or to -1001 in case there was no stored offset for that partition. # - # @param list [TopicPartitionList, nil] The topic with partitions to get the offsets for or nil - # to use the current subscription. + # @param list [TopicPartitionList, nil] The topic with partitions to get the offsets for or nil to use the current subscription. # @param timeout_ms [Integer] The timeout for fetching this information. - # @return [TopicPartitionList] + # # @raise [RdkafkaError] When getting the committed positions fails. - def committed(list=nil, timeout_ms=2000) + # + # @return [TopicPartitionList] + def committed(list=nil, timeout_ms=1200) closed_consumer_check(__method__) if list.nil? 
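The hunks above cover the consumer's public flow: subscribe, poll, commit and committed. As an orientation aid for reviewers, the following is a minimal, illustrative Ruby sketch of that flow; the broker address, group id and topic name are placeholders and are not taken from this change.

    require "rdkafka"

    # Assumed connection details; adjust for your environment.
    config = Rdkafka::Config.new(
      "bootstrap.servers"  => "localhost:9092",
      "group.id"           => "example-group",
      "enable.auto.commit" => false
    )

    consumer = config.consumer
    consumer.subscribe("example-topic")

    begin
      10.times do
        # Returns nil when no message arrived within 250 ms.
        message = consumer.poll(250)
        next unless message

        puts "#{message.topic}/#{message.partition}@#{message.offset}: #{message.payload}"

        # Asynchronous commit of the current offsets (list = nil, async = true).
        consumer.commit(nil, true)
      end

      # Without a list, #committed uses the current subscription, as documented above.
      consumer.committed.to_h.each do |topic, partitions|
        Array(partitions).each do |p|
          puts "committed #{topic}[#{p.partition}] => #{p.offset.inspect}"
        end
      end
    ensure
      consumer.close
    end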
@@ -258,9 +218,7 @@ def committed(list=nil, timeout_ms=2000) tpl = list.to_native_tpl begin - response = @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_committed(inner, tpl, timeout_ms) - end + response = Rdkafka::Bindings.rd_kafka_committed(@native_kafka, tpl, timeout_ms) if response != 0 raise Rdkafka::RdkafkaError.new(response) end @@ -270,57 +228,29 @@ def committed(list=nil, timeout_ms=2000) end end - # Return the current positions (offsets) for topics and partitions. - # The offset field of each requested partition will be set to the offset of the last consumed message + 1, or nil in case there was no previous message. - # - # @param list [TopicPartitionList, nil] The topic with partitions to get the offsets for or nil to use the current subscription. - # - # @return [TopicPartitionList] - # - # @raise [RdkafkaError] When getting the positions fails. - def position(list=nil) - if list.nil? - list = assignment - elsif !list.is_a?(TopicPartitionList) - raise TypeError.new("list has to be nil or a TopicPartitionList") - end - - tpl = list.to_native_tpl - - response = @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_position(inner, tpl) - end - - if response != 0 - raise Rdkafka::RdkafkaError.new(response) - end - - TopicPartitionList.from_native_tpl(tpl) - end - # Query broker for low (oldest/beginning) and high (newest/end) offsets for a partition. # # @param topic [String] The topic to query # @param partition [Integer] The partition to query # @param timeout_ms [Integer] The timeout for querying the broker - # @return [Integer] The low and high watermark + # # @raise [RdkafkaError] When querying the broker fails. - def query_watermark_offsets(topic, partition, timeout_ms=1000) + # + # @return [Integer] The low and high watermark + def query_watermark_offsets(topic, partition, timeout_ms=200) closed_consumer_check(__method__) low = FFI::MemoryPointer.new(:int64, 1) high = FFI::MemoryPointer.new(:int64, 1) - response = @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_query_watermark_offsets( - inner, - topic, - partition, - low, - high, - timeout_ms, - ) - end + response = Rdkafka::Bindings.rd_kafka_query_watermark_offsets( + @native_kafka, + topic, + partition, + low, + high, + timeout_ms, + ) if response != 0 raise Rdkafka::RdkafkaError.new(response, "Error querying watermark offsets for partition #{partition} of #{topic}") end @@ -338,10 +268,11 @@ def query_watermark_offsets(topic, partition, timeout_ms=1000) # # @param topic_partition_list [TopicPartitionList] The list to calculate lag for. # @param watermark_timeout_ms [Integer] The timeout for each query watermark call. - # @return [Hash>] A hash containing all topics with the lag - # per partition + # # @raise [RdkafkaError] When querying the broker fails. 
- def lag(topic_partition_list, watermark_timeout_ms=1000) + # + # @return [Hash>] A hash containing all topics with the lag per partition + def lag(topic_partition_list, watermark_timeout_ms=100) out = {} topic_partition_list.to_h.each do |topic, partitions| @@ -367,9 +298,7 @@ def lag(topic_partition_list, watermark_timeout_ms=1000) # @return [String, nil] def cluster_id closed_consumer_check(__method__) - @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_clusterid(inner) - end + Rdkafka::Bindings.rd_kafka_clusterid(@native_kafka) end # Returns this client's broker-assigned group member id @@ -379,9 +308,7 @@ def cluster_id # @return [String, nil] def member_id closed_consumer_check(__method__) - @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_memberid(inner) - end + Rdkafka::Bindings.rd_kafka_memberid(@native_kafka) end # Store offset of a message to be used in the next commit of this consumer @@ -389,68 +316,56 @@ def member_id # When using this `enable.auto.offset.store` should be set to `false` in the config. # # @param message [Rdkafka::Consumer::Message] The message which offset will be stored - # @return [nil] + # # @raise [RdkafkaError] When storing the offset fails + # + # @return [nil] def store_offset(message) closed_consumer_check(__method__) - list = TopicPartitionList.new - list.add_topic_and_partitions_with_offsets( + # rd_kafka_offset_store is one of the few calls that does not support + # a string as the topic, so create a native topic for it. + native_topic = Rdkafka::Bindings.rd_kafka_topic_new( + @native_kafka, message.topic, - message.partition => message.offset + 1 + nil + ) + response = Rdkafka::Bindings.rd_kafka_offset_store( + native_topic, + message.partition, + message.offset ) - - tpl = list.to_native_tpl - - response = @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_offsets_store( - inner, - tpl - ) - end - if response != 0 raise Rdkafka::RdkafkaError.new(response) end ensure - Rdkafka::Bindings.rd_kafka_topic_partition_list_destroy(tpl) if tpl + if native_topic && !native_topic.null? + Rdkafka::Bindings.rd_kafka_topic_destroy(native_topic) + end end # Seek to a particular message. The next poll on the topic/partition will return the # message at the given offset. # # @param message [Rdkafka::Consumer::Message] The message to which to seek - # @return [nil] + # # @raise [RdkafkaError] When seeking fails - def seek(message) - seek_by(message.topic, message.partition, message.offset) - end - - # Seek to a particular message by providing the topic, partition and offset. - # The next poll on the topic/partition will return the - # message at the given offset. # - # @param topic [String] The topic in which to seek - # @param partition [Integer] The partition number to seek - # @param offset [Integer] The partition offset to seek # @return [nil] - # @raise [RdkafkaError] When seeking fails - def seek_by(topic, partition, offset) + def seek(message) closed_consumer_check(__method__) # rd_kafka_offset_store is one of the few calls that does not support # a string as the topic, so create a native topic for it. 
- native_topic = @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_topic_new( - inner, - topic, - nil - ) - end + native_topic = Rdkafka::Bindings.rd_kafka_topic_new( + @native_kafka, + message.topic, + nil + ) response = Rdkafka::Bindings.rd_kafka_seek( native_topic, - partition, - offset, + message.partition, + message.offset, 0 # timeout ) if response != 0 @@ -462,39 +377,6 @@ def seek_by(topic, partition, offset) end end - # Lookup offset for the given partitions by timestamp. - # - # @param list [TopicPartitionList] The TopicPartitionList with timestamps instead of offsets - # - # @return [TopicPartitionList] - # - # @raise [RdKafkaError] When the OffsetForTimes lookup fails - def offsets_for_times(list, timeout_ms = 1000) - closed_consumer_check(__method__) - - if !list.is_a?(TopicPartitionList) - raise TypeError.new("list has to be a TopicPartitionList") - end - - tpl = list.to_native_tpl - - response = @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_offsets_for_times( - inner, - tpl, - timeout_ms # timeout - ) - end - - if response != 0 - raise Rdkafka::RdkafkaError.new(response) - end - - TopicPartitionList.from_native_tpl(tpl) - ensure - Rdkafka::Bindings.rd_kafka_topic_partition_list_destroy(tpl) if tpl - end - # Manually commit the current offsets of this consumer. # # To use this set `enable.auto.commit`to `false` to disable automatic triggering @@ -506,8 +388,10 @@ def offsets_for_times(list, timeout_ms = 1000) # # @param list [TopicPartitionList,nil] The topic with partitions to commit # @param async [Boolean] Whether to commit async or wait for the commit to finish - # @return [nil] + # # @raise [RdkafkaError] When committing fails + # + # @return [nil] def commit(list=nil, async=false) closed_consumer_check(__method__) @@ -518,9 +402,7 @@ def commit(list=nil, async=false) tpl = list ? list.to_native_tpl : nil begin - response = @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_commit(inner, tpl, async) - end + response = Rdkafka::Bindings.rd_kafka_commit(@native_kafka, tpl, async) if response != 0 raise Rdkafka::RdkafkaError.new(response) end @@ -532,14 +414,14 @@ def commit(list=nil, async=false) # Poll for the next message on one of the subscribed topics # # @param timeout_ms [Integer] Timeout of this poll - # @return [Message, nil] A message or nil if there was no new message within the timeout + # # @raise [RdkafkaError] When polling fails + # + # @return [Message, nil] A message or nil if there was no new message within the timeout def poll(timeout_ms) closed_consumer_check(__method__) - message_ptr = @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_consumer_poll(inner, timeout_ms) - end + message_ptr = Rdkafka::Bindings.rd_kafka_consumer_poll(@native_kafka, timeout_ms) if message_ptr.null? nil else @@ -554,53 +436,30 @@ def poll(timeout_ms) end ensure # Clean up rdkafka message if there is one - if message_ptr && !message_ptr.null? + if !message_ptr.nil? && !message_ptr.null? Rdkafka::Bindings.rd_kafka_message_destroy(message_ptr) end end - # Polls the main rdkafka queue (not the consumer one). Do **NOT** use it if `consumer_poll_set` - # was set to `true`. - # - # Events will cause application-provided callbacks to be called. - # - # Events (in the context of the consumer): - # - error callbacks - # - stats callbacks - # - any other callbacks supported by librdkafka that are not part of the consumer_poll, that - # would have a callback configured and activated. 
- # - # This method needs to be called at regular intervals to serve any queued callbacks waiting to - # be called. When in use, does **NOT** replace `#poll` but needs to run complementary with it. - # - # @param timeout_ms [Integer] poll timeout. If set to 0 will run async, when set to -1 will - # block until any events available. - # - # @note This method technically should be called `#poll` and the current `#poll` should be - # called `#consumer_poll` though we keep the current naming convention to make it backward - # compatible. - def events_poll(timeout_ms = 0) - @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_poll(inner, timeout_ms) - end - end - # Poll for new messages and yield for each received one. Iteration # will end when the consumer is closed. # - # If `enable.partition.eof` is turned on in the config this will raise an error when an eof is - # reached, so you probably want to disable that when using this method of iteration. + # If `enable.partition.eof` is turned on in the config this will raise an + # error when an eof is reached, so you probably want to disable that when + # using this method of iteration. + # + # @raise [RdkafkaError] When polling fails # # @yieldparam message [Message] Received message + # # @return [nil] - # @raise [RdkafkaError] When polling fails def each loop do message = poll(250) if message yield(message) else - if closed? + if @closing break else next @@ -609,6 +468,10 @@ def each end end + def closed_consumer_check(method) + raise Rdkafka::ClosedConsumerError.new(method) if @native_kafka.nil? + end + # Poll for new messages and yield them in batches that may contain # messages from more than one partition. # @@ -635,35 +498,36 @@ def each # Exception behavior is more complicated than with `each`, in that if # :yield_on_error is true, and an exception is raised during the # poll, and messages have already been received, they will be yielded to - # the caller before the exception is allowed to propagate. + # the caller before the exception is allowed to propogate. # # If you are setting either auto.commit or auto.offset.store to false in # the consumer configuration, then you should let yield_on_error keep its - # default value of false because you are guaranteed to see these messages + # default value of false because you are gauranteed to see these messages # again. However, if both auto.commit and auto.offset.store are set to # true, you should set yield_on_error to true so you can process messages # that you may or may not see again. # # @param max_items [Integer] Maximum size of the yielded array of messages + # # @param bytes_threshold [Integer] Threshold number of total message bytes in the yielded array of messages + # # @param timeout_ms [Integer] max time to wait for up to max_items # - # @yieldparam messages [Array] An array of received Message - # @yieldparam pending_exception [Exception] normally nil, or an exception + # @raise [RdkafkaError] When polling fails # # @yield [messages, pending_exception] - # which will be propagated after processing of the partial batch is complete. + # @yieldparam messages [Array] An array of received Message + # @yieldparam pending_exception [Exception] normally nil, or an exception + # which will be propogated after processing of the partial batch is complete. 
# # @return [nil] - # - # @raise [RdkafkaError] When polling fails def each_batch(max_items: 100, bytes_threshold: Float::INFINITY, timeout_ms: 250, yield_on_error: false, &block) closed_consumer_check(__method__) slice = [] bytes = 0 end_time = monotonic_now + timeout_ms / 1000.0 loop do - break if closed? + break if @closing max_wait = end_time - monotonic_now max_wait_ms = if max_wait <= 0 0 # should not block, but may retrieve a message @@ -681,7 +545,7 @@ def each_batch(max_items: 100, bytes_threshold: Float::INFINITY, timeout_ms: 250 end if message slice << message - bytes += message.payload.bytesize if message.payload + bytes += message.payload.bytesize end if slice.size == max_items || bytes >= bytes_threshold || monotonic_now >= end_time - 0.001 yield slice.dup, nil @@ -692,26 +556,10 @@ def each_batch(max_items: 100, bytes_threshold: Float::INFINITY, timeout_ms: 250 end end - # Returns pointer to the consumer group metadata. It is used only in the context of - # exactly-once-semantics in transactions, this is why it is never remapped to Ruby - # - # This API is **not** usable by itself from Ruby - # - # @note This pointer **needs** to be removed with `#rd_kafka_consumer_group_metadata_destroy` - # - # @private - def consumer_group_metadata_pointer - closed_consumer_check(__method__) - - @native_kafka.with_inner do |inner| - Bindings.rd_kafka_consumer_group_metadata(inner) - end - end - private - - def closed_consumer_check(method) - raise Rdkafka::ClosedConsumerError.new(method) if closed? + def monotonic_now + # needed because Time.now can go backwards + Process.clock_gettime(Process::CLOCK_MONOTONIC) end end end diff --git a/lib/rdkafka/consumer/headers.rb b/lib/rdkafka/consumer/headers.rb index e0af326d..864447e5 100644 --- a/lib/rdkafka/consumer/headers.rb +++ b/lib/rdkafka/consumer/headers.rb @@ -1,24 +1,20 @@ -# frozen_string_literal: true - module Rdkafka class Consumer - # Interface to return headers for a consumer message - module Headers - EMPTY_HEADERS = {}.freeze - - # Reads a librdkafka native message's headers and returns them as a Ruby Hash + # A message headers + class Headers + # Reads a native kafka's message header into ruby's hash # - # @private + # @return [Hash] a message headers # - # @param [Rdkafka::Bindings::Message] native_message - # @return [Hash] headers Hash for the native_message # @raise [Rdkafka::RdkafkaError] when fail to read headers + # + # @private def self.from_native(native_message) headers_ptrptr = FFI::MemoryPointer.new(:pointer) err = Rdkafka::Bindings.rd_kafka_message_headers(native_message, headers_ptrptr) if err == Rdkafka::Bindings::RD_KAFKA_RESP_ERR__NOENT - return EMPTY_HEADERS + return {} elsif err != Rdkafka::Bindings::RD_KAFKA_RESP_ERR_NO_ERROR raise Rdkafka::RdkafkaError.new(err, "Error reading message headers") end @@ -28,7 +24,6 @@ def self.from_native(native_message) name_ptrptr = FFI::MemoryPointer.new(:pointer) value_ptrptr = FFI::MemoryPointer.new(:pointer) size_ptr = Rdkafka::Bindings::SizePtr.new - headers = {} idx = 0 @@ -56,12 +51,12 @@ def self.from_native(native_message) value = value_ptr.read_string(size) - headers[name] = value + headers[name.to_sym] = value idx += 1 end - headers.freeze + headers end end end diff --git a/lib/rdkafka/consumer/message.rb b/lib/rdkafka/consumer/message.rb index 8d00d29c..37e24e5c 100644 --- a/lib/rdkafka/consumer/message.rb +++ b/lib/rdkafka/consumer/message.rb @@ -1,5 +1,3 @@ -# frozen_string_literal: true - module Rdkafka class Consumer # A message that was consumed from a topic. 
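Since the preceding hunks spell out the each_batch contract (max_items, bytes_threshold, timeout_ms, yield_on_error and the messages/pending_exception yield), here is a short, illustrative driver for it. It assumes a consumer built as in the earlier sketch; the topic name and tuning values are placeholders.

    consumer.subscribe("example-topic")

    # Iterates until the consumer is closed. With the default yield_on_error: false,
    # pending_exception is always nil and poll errors are raised directly.
    consumer.each_batch(max_items: 100, bytes_threshold: 1_000_000, timeout_ms: 250) do |messages, pending_exception|
      # A batch may mix messages from several partitions.
      messages.each { |message| puts message.payload }

      # Only relevant when yield_on_error: true is passed: re-raise after the
      # partial batch has been handled.
      raise pending_exception if pending_exception
    end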
@@ -20,7 +18,7 @@ class Message # @return [String, nil] attr_reader :key - # This message's offset in its partition + # This message's offset in it's partition # @return [Integer] attr_reader :offset diff --git a/lib/rdkafka/consumer/partition.rb b/lib/rdkafka/consumer/partition.rb index 8e143486..52a9c436 100644 --- a/lib/rdkafka/consumer/partition.rb +++ b/lib/rdkafka/consumer/partition.rb @@ -1,5 +1,3 @@ -# frozen_string_literal: true - module Rdkafka class Consumer # Information about a partition, used in {TopicPartitionList}. diff --git a/lib/rdkafka/consumer/topic_partition_list.rb b/lib/rdkafka/consumer/topic_partition_list.rb index a5e7ddc0..a20b0b5d 100644 --- a/lib/rdkafka/consumer/topic_partition_list.rb +++ b/lib/rdkafka/consumer/topic_partition_list.rb @@ -1,5 +1,3 @@ -# frozen_string_literal: true - module Rdkafka class Consumer # A list of topics with their partition information @@ -36,11 +34,6 @@ def empty? # Add a topic with optionally partitions to the list. # Calling this method multiple times for the same topic will overwrite the previous configuraton. # - # @param topic [String] The topic's name - # @param partitions [Array, Range, Integer] The topic's partitions or partition count - # - # @return [nil] - # # @example Add a topic with unassigned partitions # tpl.add_topic("topic") # @@ -50,6 +43,10 @@ def empty? # @example Add a topic with all topics up to a count # tpl.add_topic("topic", 9) # + # @param topic [String] The topic's name + # @param partitions [Array, Range, Integer] The topic's partitions or partition count + # + # @return [nil] def add_topic(topic, partitions=nil) if partitions.nil? @data[topic.to_s] = nil @@ -91,11 +88,11 @@ def ==(other) # Create a new topic partition list based of a native one. # - # @private - # # @param pointer [FFI::Pointer] Optional pointer to an existing native list. Its contents will be copied. # # @return [TopicPartitionList] + # + # @private def self.from_native_tpl(pointer) # Data to be moved into the tpl data = {} @@ -128,8 +125,8 @@ def self.from_native_tpl(pointer) # # The pointer will be cleaned by `rd_kafka_topic_partition_list_destroy` when GC releases it. # - # @private # @return [FFI::Pointer] + # @private def to_native_tpl tpl = Rdkafka::Bindings.rd_kafka_topic_partition_list_new(count) @@ -143,13 +140,11 @@ def to_native_tpl ) if p.offset - offset = p.offset.is_a?(Time) ? p.offset.to_f * 1_000 : p.offset - Rdkafka::Bindings.rd_kafka_topic_partition_list_set_offset( tpl, topic, p.partition, - offset + p.offset ) end end diff --git a/lib/rdkafka/error.rb b/lib/rdkafka/error.rb index afae22c7..7ce4ffb6 100644 --- a/lib/rdkafka/error.rb +++ b/lib/rdkafka/error.rb @@ -1,5 +1,3 @@ -# frozen_string_literal: true - module Rdkafka # Base error class. class BaseError < RuntimeError; end @@ -85,17 +83,4 @@ def initialize(method) super("Illegal call to #{method.to_s} on a closed producer") end end - - # Error class for public consumer method calls on a closed admin. 
- class ClosedAdminError < BaseError - def initialize(method) - super("Illegal call to #{method.to_s} on a closed admin") - end - end - - class ClosedInnerError < BaseError - def initialize - super("Illegal call to a closed inner librdkafka instance") - end - end end diff --git a/lib/rdkafka/helpers/oauth.rb b/lib/rdkafka/helpers/oauth.rb deleted file mode 100644 index 22705319..00000000 --- a/lib/rdkafka/helpers/oauth.rb +++ /dev/null @@ -1,58 +0,0 @@ -module Rdkafka - module Helpers - - module OAuth - - # Set the OAuthBearer token - # - # @param token [String] the mandatory token value to set, often (but not necessarily) a JWS compact serialization as per https://tools.ietf.org/html/rfc7515#section-3.1. - # @param lifetime_ms [Integer] when the token expires, in terms of the number of milliseconds since the epoch. See https://currentmillis.com/. - # @param principal_name [String] the mandatory Kafka principal name associated with the token. - # @param extensions [Hash] optional SASL extensions key-value pairs to be communicated to the broker as additional key-value pairs during the initial client response as per https://tools.ietf.org/html/rfc7628#section-3.1. - # @return [Integer] 0 on success - def oauthbearer_set_token(token:, lifetime_ms:, principal_name:, extensions: nil) - error_buffer = FFI::MemoryPointer.from_string(" " * 256) - - response = @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_oauthbearer_set_token( - inner, token, lifetime_ms, principal_name, - flatten_extensions(extensions), extension_size(extensions), error_buffer, 256 - ) - end - - return response if response.zero? - - oauthbearer_set_token_failure("Failed to set token: #{error_buffer.read_string}") - - response - end - - # Marks failed oauth token acquire in librdkafka - # - # @param reason [String] human readable error reason for failing to acquire token - def oauthbearer_set_token_failure(reason) - @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_oauthbearer_set_token_failure( - inner, - reason - ) - end - end - - private - - # Flatten the extensions hash into a string according to the spec, https://datatracker.ietf.org/doc/html/rfc7628#section-3.1 - def flatten_extensions(extensions) - return nil unless extensions - "\x01#{extensions.map { |e| e.join("=") }.join("\x01")}" - end - - # extension_size is the number of keys + values which should be a non-negative even number - # https://github.com/confluentinc/librdkafka/blob/master/src/rdkafka_sasl_oauthbearer.c#L327-L347 - def extension_size(extensions) - return 0 unless extensions - extensions.size * 2 - end - end - end -end diff --git a/lib/rdkafka/helpers/time.rb b/lib/rdkafka/helpers/time.rb deleted file mode 100644 index 152151b8..00000000 --- a/lib/rdkafka/helpers/time.rb +++ /dev/null @@ -1,14 +0,0 @@ -# frozen_string_literal: true - -module Rdkafka - # Namespace for some small utilities used in multiple components - module Helpers - # Time related methods used across Karafka - module Time - # @return [Float] current monotonic time in seconds with microsecond precision - def monotonic_now - ::Process.clock_gettime(::Process::CLOCK_MONOTONIC) - end - end - end -end diff --git a/lib/rdkafka/metadata.rb b/lib/rdkafka/metadata.rb index 10cca3be..576f396d 100644 --- a/lib/rdkafka/metadata.rb +++ b/lib/rdkafka/metadata.rb @@ -1,21 +1,8 @@ -# frozen_string_literal: true - module Rdkafka class Metadata attr_reader :brokers, :topics - # Errors upon which we retry the metadata fetch - RETRIED_ERRORS = %i[ - timed_out - 
leader_not_available - ].freeze - - private_constant :RETRIED_ERRORS - - def initialize(native_client, topic_name = nil, timeout_ms = 2_000) - attempt ||= 0 - attempt += 1 - + def initialize(native_client, topic_name = nil) native_topic = if topic_name Rdkafka::Bindings.rd_kafka_topic_new(native_client, topic_name, nil) end @@ -27,22 +14,12 @@ def initialize(native_client, topic_name = nil, timeout_ms = 2_000) topic_flag = topic_name.nil? ? 1 : 0 # Retrieve the Metadata - result = Rdkafka::Bindings.rd_kafka_metadata(native_client, topic_flag, native_topic, ptr, timeout_ms) + result = Rdkafka::Bindings.rd_kafka_metadata(native_client, topic_flag, native_topic, ptr, 250) # Error Handling raise Rdkafka::RdkafkaError.new(result) unless result.zero? metadata_from_native(ptr.read_pointer) - rescue ::Rdkafka::RdkafkaError => e - raise unless RETRIED_ERRORS.include?(e.code) - raise if attempt > 10 - - backoff_factor = 2**attempt - timeout = backoff_factor * 0.1 - - sleep(timeout) - - retry ensure Rdkafka::Bindings.rd_kafka_topic_destroy(native_topic) if topic_name Rdkafka::Bindings.rd_kafka_metadata_destroy(ptr.read_pointer) diff --git a/lib/rdkafka/native_kafka.rb b/lib/rdkafka/native_kafka.rb deleted file mode 100644 index 8bf88d4f..00000000 --- a/lib/rdkafka/native_kafka.rb +++ /dev/null @@ -1,133 +0,0 @@ -# frozen_string_literal: true - -module Rdkafka - # @private - # A wrapper around a native kafka that polls and cleanly exits - class NativeKafka - def initialize(inner, run_polling_thread:, opaque:, auto_start: true) - @inner = inner - @opaque = opaque - # Lock around external access - @access_mutex = Mutex.new - # Lock around internal polling - @poll_mutex = Mutex.new - # Lock around decrementing the operations in progress counter - # We have two mutexes - one for increment (`@access_mutex`) and one for decrement mutex - # because they serve different purposes: - # - # - `@access_mutex` allows us to lock the execution and make sure that any operation within - # the `#synchronize` is the only one running and that there are no other running - # operations. - # - `@decrement_mutex` ensures, that our decrement operation is thread-safe for any Ruby - # implementation. - # - # We do not use the same mutex, because it could create a deadlock when an already - # incremented operation cannot decrement because `@access_lock` is now owned by a different - # thread in a synchronized mode and the synchronized mode is waiting on the decrement. - @decrement_mutex = Mutex.new - # counter for operations in progress using inner - @operations_in_progress = 0 - - @run_polling_thread = run_polling_thread - - start if auto_start - - @closing = false - end - - def start - synchronize do - return if @started - - @started = true - - # Trigger initial poll to make sure oauthbearer cb and other initial cb are handled - Rdkafka::Bindings.rd_kafka_poll(@inner, 0) - - if @run_polling_thread - # Start thread to poll client for delivery callbacks, - # not used in consumer. - @polling_thread = Thread.new do - loop do - @poll_mutex.synchronize do - Rdkafka::Bindings.rd_kafka_poll(@inner, 100) - end - - # Exit thread if closing and the poll queue is empty - if Thread.current[:closing] && Rdkafka::Bindings.rd_kafka_outq_len(@inner) == 0 - break - end - end - end - - @polling_thread.name = "rdkafka.native_kafka##{Rdkafka::Bindings.rd_kafka_name(@inner).gsub('rdkafka', '')}" - @polling_thread.abort_on_exception = true - @polling_thread[:closing] = false - end - end - end - - def with_inner - if @access_mutex.owned? 
- @operations_in_progress += 1 - else - @access_mutex.synchronize { @operations_in_progress += 1 } - end - - @inner.nil? ? raise(ClosedInnerError) : yield(@inner) - ensure - @decrement_mutex.synchronize { @operations_in_progress -= 1 } - end - - def synchronize(&block) - @access_mutex.synchronize do - # Wait for any commands using the inner to finish - # This can take a while on blocking operations like polling but is essential not to proceed - # with certain types of operations like resources destruction as it can cause the process - # to hang or crash - sleep(0.01) until @operations_in_progress.zero? - - with_inner(&block) - end - end - - def finalizer - ->(_) { close } - end - - def closed? - @closing || @inner.nil? - end - - def close(object_id=nil) - return if closed? - - synchronize do - # Indicate to the outside world that we are closing - @closing = true - - if @polling_thread - # Indicate to polling thread that we're closing - @polling_thread[:closing] = true - - # Wait for the polling thread to finish up, - # this can be aborted in practice if this - # code runs from a finalizer. - @polling_thread.join - end - - # Destroy the client after locking both mutexes - @poll_mutex.lock - - # This check prevents a race condition, where we would enter the close in two threads - # and after unlocking the primary one that hold the lock but finished, ours would be unlocked - # and would continue to run, trying to destroy inner twice - return unless @inner - - Rdkafka::Bindings.rd_kafka_destroy(@inner) - @inner = nil - @opaque = nil - end - end - end -end diff --git a/lib/rdkafka/producer.rb b/lib/rdkafka/producer.rb index 2d4cfd7c..f468f0f2 100644 --- a/lib/rdkafka/producer.rb +++ b/lib/rdkafka/producer.rb @@ -1,24 +1,8 @@ -# frozen_string_literal: true +require "securerandom" module Rdkafka # A producer for Kafka messages. To create a producer set up a {Config} and call {Config#producer producer} on that. class Producer - include Helpers::Time - include Helpers::OAuth - - # Cache partitions count for 30 seconds - PARTITIONS_COUNT_TTL = 30 - - # Empty hash used as a default - EMPTY_HASH = {}.freeze - - private_constant :PARTITIONS_COUNT_TTL, :EMPTY_HASH - - # Raised when there was a critical issue when invoking rd_kafka_topic_new - # This is a temporary solution until https://github.com/karafka/rdkafka-ruby/issues/451 is - # resolved and this is normalized in all the places - class TopicHandleCreationError < RuntimeError; end - # @private # Returns the current delivery callback, by default this is nil. # @@ -26,105 +10,29 @@ class TopicHandleCreationError < RuntimeError; end attr_reader :delivery_callback # @private - # Returns the number of arguments accepted by the callback, by default this is nil. 
- # - # @return [Integer, nil] - attr_reader :delivery_callback_arity - - # @private - # @param native_kafka [NativeKafka] - # @param partitioner_name [String, nil] name of the partitioner we want to use or nil to use - # the "consistent_random" default - def initialize(native_kafka, partitioner_name) - @topics_refs_map = {} - @topics_configs = {} + def initialize(native_kafka) + @id = SecureRandom.uuid + @closing = false @native_kafka = native_kafka - @partitioner_name = partitioner_name || "consistent_random" - # Makes sure, that native kafka gets closed before it gets GCed by Ruby - ObjectSpace.define_finalizer(self, native_kafka.finalizer) + # Makes sure, that the producer gets closed before it gets GCed by Ruby + ObjectSpace.define_finalizer(@id, proc { close }) - @_partitions_count_cache = Hash.new do |cache, topic| - topic_metadata = nil - - @native_kafka.with_inner do |inner| - topic_metadata = ::Rdkafka::Metadata.new(inner, topic).topics&.first + # Start thread to poll client for delivery callbacks + @polling_thread = Thread.new do + loop do + Rdkafka::Bindings.rd_kafka_poll(@native_kafka, 250) + # Exit thread if closing and the poll queue is empty + if @closing && Rdkafka::Bindings.rd_kafka_outq_len(@native_kafka) == 0 + break + end end - - partition_count = topic_metadata ? topic_metadata[:partition_count] : -1 - - # This approach caches the failure to fetch only for 1 second. This will make sure, that - # we do not cache the failure for too long but also "buys" us a bit of time in case there - # would be issues in the cluster so we won't overaload it with consecutive requests - cache[topic] = if partition_count.positive? - [monotonic_now, partition_count] - else - [monotonic_now - PARTITIONS_COUNT_TTL + 5, partition_count] - end - end - end - - # Sets alternative set of configuration details that can be set per topic - # @note It is not allowed to re-set the same topic config twice because of the underlying - # librdkafka caching - # @param topic [String] The topic name - # @param config [Hash] config we want to use per topic basis - # @param config_hash [Integer] hash of the config. We expect it here instead of computing it, - # because it is already computed during the retrieval attempt in the `#produce` flow. - def set_topic_config(topic, config, config_hash) - # Ensure lock on topic reference just in case - @native_kafka.with_inner do |inner| - @topics_refs_map[topic] ||= {} - @topics_configs[topic] ||= {} - - return if @topics_configs[topic].key?(config_hash) - - # If config is empty, we create an empty reference that will be used with defaults - rd_topic_config = if config.empty? - nil - else - Rdkafka::Bindings.rd_kafka_topic_conf_new.tap do |topic_config| - config.each do |key, value| - error_buffer = FFI::MemoryPointer.new(:char, 256) - result = Rdkafka::Bindings.rd_kafka_topic_conf_set( - topic_config, - key.to_s, - value.to_s, - error_buffer, - 256 - ) - - unless result == :config_ok - raise Config::ConfigError.new(error_buffer.read_string) - end - end - end - end - - topic_handle = Bindings.rd_kafka_topic_new(inner, topic, rd_topic_config) - - raise TopicHandleCreationError.new("Error creating topic handle for topic #{topic}") if topic_handle.null? 
- - @topics_configs[topic][config_hash] = config - @topics_refs_map[topic][config_hash] = topic_handle - end - end - - # Starts the native Kafka polling thread and kicks off the init polling - # @note Not needed to run unless explicit start was disabled - def start - @native_kafka.start - end - - # @return [String] producer name - def name - @name ||= @native_kafka.with_inner do |inner| - ::Rdkafka::Bindings.rd_kafka_name(inner) end + @polling_thread.abort_on_exception = true end # Set a callback that will be called every time a message is successfully produced. - # The callback is called with a {DeliveryReport} and {DeliveryHandle} + # The callback is called with a {DeliveryReport} # # @param callback [Proc, #call] The callback # @@ -132,108 +40,32 @@ def name def delivery_callback=(callback) raise TypeError.new("Callback has to be callable") unless callback.respond_to?(:call) @delivery_callback = callback - @delivery_callback_arity = arity(callback) end # Close this producer and wait for the internal poll queue to empty. def close - return if closed? - ObjectSpace.undefine_finalizer(self) - - @native_kafka.close do - # We need to remove the topics references objects before we destroy the producer, - # otherwise they would leak out - @topics_refs_map.each_value do |refs| - refs.each_value do |ref| - Rdkafka::Bindings.rd_kafka_topic_destroy(ref) - end - end - end - - @topics_refs_map.clear - end - - # Whether this producer has closed - def closed? - @native_kafka.closed? - end - - # Wait until all outstanding producer requests are completed, with the given timeout - # in seconds. Call this before closing a producer to ensure delivery of all messages. - # - # @param timeout_ms [Integer] how long should we wait for flush of all messages - # @return [Boolean] true if no more data and all was flushed, false in case there are still - # outgoing messages after the timeout - # - # @note We raise an exception for other errors because based on the librdkafka docs, there - # should be no other errors. - # - # @note For `timed_out` we do not raise an error to keep it backwards compatible - def flush(timeout_ms=5_000) - closed_producer_check(__method__) - - code = nil - - @native_kafka.with_inner do |inner| - code = Rdkafka::Bindings.rd_kafka_flush(inner, timeout_ms) - end - - # Early skip not to build the error message - return true if code.zero? + ObjectSpace.undefine_finalizer(@id) - error = Rdkafka::RdkafkaError.new(code) + return unless @native_kafka - return false if error.code == :timed_out - - raise(error) - end - - # Purges the outgoing queue and releases all resources. - # - # Useful when closing the producer with outgoing messages to unstable clusters or when for - # any other reasons waiting cannot go on anymore. This purges both the queue and all the - # inflight requests + updates the delivery handles statuses so they can be materialized into - # `purge_queue` errors. - def purge - closed_producer_check(__method__) - - code = nil - - @native_kafka.with_inner do |inner| - code = Bindings.rd_kafka_purge( - inner, - Bindings::RD_KAFKA_PURGE_F_QUEUE | Bindings::RD_KAFKA_PURGE_F_INFLIGHT - ) - end - - code.zero? || raise(Rdkafka::RdkafkaError.new(code)) - - # Wait for the purge to affect everything - sleep(0.001) until flush(100) - - true + # Indicate to polling thread that we're closing + @closing = true + # Wait for the polling thread to finish up + @polling_thread.join + Rdkafka::Bindings.rd_kafka_destroy(@native_kafka) + @native_kafka = nil end # Partition count for a given topic. 
+ # NOTE: If 'allow.auto.create.topics' is set to true in the broker, the topic will be auto-created after returning nil. # # @param topic [String] The topic name. - # @return [Integer] partition count for a given topic or `-1` if it could not be obtained. # - # @note If 'allow.auto.create.topics' is set to true in the broker, the topic will be - # auto-created after returning nil. + # @return partition count [Integer,nil] # - # @note We cache the partition count for a given topic for given time. - # This prevents us in case someone uses `partition_key` from querying for the count with - # each message. Instead we query once every 30 seconds at most if we have a valid partition - # count or every 5 seconds in case we were not able to obtain number of partitions def partition_count(topic) closed_producer_check(__method__) - - @_partitions_count_cache.delete_if do |_, cached| - monotonic_now - cached.first > PARTITIONS_COUNT_TTL - end - - @_partitions_count_cache[topic].last + Rdkafka::Metadata.new(@native_kafka, topic).topics&.first[:partition_count] end # Produces a message to a Kafka topic. The message is added to rdkafka's queue, call {DeliveryHandle#wait wait} on the returned delivery handle to make sure it is delivered. @@ -243,28 +75,15 @@ def partition_count(topic) # # @param topic [String] The topic to produce to # @param payload [String,nil] The message's payload - # @param key [String, nil] The message's key + # @param key [String] The message's key # @param partition [Integer,nil] Optional partition to produce to - # @param partition_key [String, nil] Optional partition key based on which partition assignment can happen # @param timestamp [Time,Integer,nil] Optional timestamp of this message. Integer timestamp is in milliseconds since Jan 1 1970. # @param headers [Hash] Optional message headers - # @param label [Object, nil] a label that can be assigned when producing a message that will be part of the delivery handle and the delivery report - # @param topic_config [Hash] topic config for given message dispatch. Allows to send messages to topics with different configuration - # - # @return [DeliveryHandle] Delivery handle that can be used to wait for the result of producing this message # # @raise [RdkafkaError] When adding the message to rdkafka's queue failed - def produce( - topic:, - payload: nil, - key: nil, - partition: nil, - partition_key: nil, - timestamp: nil, - headers: nil, - label: nil, - topic_config: EMPTY_HASH - ) + # + # @return [DeliveryHandle] Delivery handle that can be used to wait for the result of producing this message + def produce(topic:, payload: nil, key: nil, partition: nil, partition_key: nil, timestamp: nil, headers: nil) closed_producer_check(__method__) # Start by checking and converting the input @@ -283,22 +102,10 @@ def produce( key.bytesize end - topic_config_hash = topic_config.hash - - # Checks if we have the rdkafka topic reference object ready. It saves us on object - # allocation and allows to use custom config on demand. - set_topic_config(topic, topic_config, topic_config_hash) unless @topics_refs_map.dig(topic, topic_config_hash) - topic_ref = @topics_refs_map.dig(topic, topic_config_hash) - if partition_key partition_count = partition_count(topic) - - # Check if there are no overrides for the partitioner and use the default one only when - # no per-topic is present. 
- partitioner_name = @topics_configs.dig(topic, topic_config_hash, :partitioner) || @partitioner_name - # If the topic is not present, set to -1 - partition = Rdkafka::Bindings.partitioner(partition_key, partition_count, @partitioner_name) if partition_count.positive? + partition = Rdkafka::Bindings.partitioner(partition_key, partition_count) if partition_count end # If partition is nil, use -1 to let librdafka set the partition randomly or @@ -318,8 +125,6 @@ def produce( end delivery_handle = DeliveryHandle.new - delivery_handle.label = label - delivery_handle.topic = topic delivery_handle[:pending] = true delivery_handle[:response] = -1 delivery_handle[:partition] = -1 @@ -327,7 +132,7 @@ def produce( DeliveryHandle.register(delivery_handle) args = [ - :int, Rdkafka::Bindings::RD_KAFKA_VTYPE_RKT, :pointer, topic_ref, + :int, Rdkafka::Bindings::RD_KAFKA_VTYPE_TOPIC, :string, topic, :int, Rdkafka::Bindings::RD_KAFKA_VTYPE_MSGFLAGS, :int, Rdkafka::Bindings::RD_KAFKA_MSG_F_COPY, :int, Rdkafka::Bindings::RD_KAFKA_VTYPE_VALUE, :buffer_in, payload, :size_t, payload_size, :int, Rdkafka::Bindings::RD_KAFKA_VTYPE_KEY, :buffer_in, key, :size_t, key_size, @@ -343,19 +148,17 @@ def produce( args << :int << Rdkafka::Bindings::RD_KAFKA_VTYPE_HEADER args << :string << key args << :pointer << value - args << :size_t << value.bytesize + args << :size_t << value.bytes.size end end args << :int << Rdkafka::Bindings::RD_KAFKA_VTYPE_END # Produce the message - response = @native_kafka.with_inner do |inner| - Rdkafka::Bindings.rd_kafka_producev( - inner, - *args - ) - end + response = Rdkafka::Bindings.rd_kafka_producev( + @native_kafka, + *args + ) # Raise error if the produce call was not successful if response != 0 @@ -366,41 +169,13 @@ def produce( delivery_handle end - # Calls (if registered) the delivery callback - # - # @param delivery_report [Producer::DeliveryReport] - # @param delivery_handle [Producer::DeliveryHandle] - def call_delivery_callback(delivery_report, delivery_handle) - return unless @delivery_callback - - case @delivery_callback_arity - when 0 - @delivery_callback.call - when 1 - @delivery_callback.call(delivery_report) - else - @delivery_callback.call(delivery_report, delivery_handle) - end - end - - # Figures out the arity of a given block/method - # - # @param callback [#call, Proc] - # @return [Integer] arity of the provided block/method - def arity(callback) - return callback.arity if callback.respond_to?(:arity) - - callback.method(:call).arity + # @private + def call_delivery_callback(delivery_handle) + @delivery_callback.call(delivery_handle) if @delivery_callback end - private - - # Ensures, no operations can happen on a closed producer - # - # @param method [Symbol] name of the method that invoked producer - # @raise [Rdkafka::ClosedProducerError] def closed_producer_check(method) - raise Rdkafka::ClosedProducerError.new(method) if closed? + raise Rdkafka::ClosedProducerError.new(method) if @native_kafka.nil? 
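Aside, not part of the diff: a condensed end-to-end sketch of the producer surface touched above, covering the delivery callback, `#produce`, waiting on the handle, and `#close`. A one-argument callback works against both sides of this diff; the removed arity-aware dispatcher additionally supports a two-argument callback that also receives the delivery handle. Broker address is an assumption.

```ruby
require "rdkafka"

producer = Rdkafka::Config.new("bootstrap.servers" => "localhost:9092").producer

# Called once per successfully produced message with the delivery report.
producer.delivery_callback = lambda do |report|
  puts "delivered: partition=#{report.partition} offset=#{report.offset}"
end

# produce enqueues the message; wait blocks until the broker acknowledges it
# or the timeout elapses.
handle = producer.produce(topic: "example-topic", payload: "hello", key: "k1")
handle.wait(max_wait_timeout: 10)

producer.close
```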
end end end diff --git a/lib/rdkafka/producer/delivery_handle.rb b/lib/rdkafka/producer/delivery_handle.rb index b73890c9..08d60b1f 100644 --- a/lib/rdkafka/producer/delivery_handle.rb +++ b/lib/rdkafka/producer/delivery_handle.rb @@ -1,5 +1,3 @@ -# frozen_string_literal: true - module Rdkafka class Producer # Handle to wait for a delivery report which is returned when @@ -8,15 +6,7 @@ class DeliveryHandle < Rdkafka::AbstractHandle layout :pending, :bool, :response, :int, :partition, :int, - :offset, :int64, - :topic_name, :pointer - - # @return [Object, nil] label set during message production or nil by default - attr_accessor :label - - # @return [String] topic where we are trying to send the message - # We use this instead of reading from `topic_name` pointer to save on memory allocations - attr_accessor :topic + :offset, :int64 # @return [String] the name of the operation (e.g. "delivery") def operation_name @@ -25,15 +15,7 @@ def operation_name # @return [DeliveryReport] a report on the delivery of the message def create_result - DeliveryReport.new( - self[:partition], - self[:offset], - # For part of errors, we will not get a topic name reference and in cases like this - # we should not return it - topic, - self[:response] != 0 ? RdkafkaError.new(self[:response]) : nil, - label - ) + DeliveryReport.new(self[:partition], self[:offset]) end end end diff --git a/lib/rdkafka/producer/delivery_report.rb b/lib/rdkafka/producer/delivery_report.rb index 01d29f98..b1d72d3b 100644 --- a/lib/rdkafka/producer/delivery_report.rb +++ b/lib/rdkafka/producer/delivery_report.rb @@ -1,5 +1,3 @@ -# frozen_string_literal: true - module Rdkafka class Producer # Delivery report for a successfully produced message. @@ -12,34 +10,16 @@ class DeliveryReport # @return [Integer] attr_reader :offset - # The name of the topic this message was produced to or nil in case of reports with errors - # where topic was not reached. - # - # @return [String, nil] - attr_reader :topic_name - # Error in case happen during produce. - # @return [Integer] + # @return [string] attr_reader :error - # @return [Object, nil] label set during message production or nil by default - attr_reader :label - - # We alias the `#topic_name` under `#topic` to make this consistent with `Consumer::Message` - # where the topic name is under `#topic` method. 
That way we have a consistent name that - # is present in both places - # - # We do not remove the original `#topic_name` because of backwards compatibility - alias topic topic_name - private - def initialize(partition, offset, topic_name = nil, error = nil, label = nil) + def initialize(partition, offset, error = nil) @partition = partition @offset = offset - @topic_name = topic_name @error = error - @label = label end end end diff --git a/lib/rdkafka/version.rb b/lib/rdkafka/version.rb index dd0c6c02..81a0e66b 100644 --- a/lib/rdkafka/version.rb +++ b/lib/rdkafka/version.rb @@ -1,7 +1,5 @@ -# frozen_string_literal: true - module Rdkafka - VERSION = "0.17.0" - LIBRDKAFKA_VERSION = "2.4.0" - LIBRDKAFKA_SOURCE_SHA256 = "d645e47d961db47f1ead29652606a502bdd2a880c85c1e060e94eea040f1a19a" + VERSION = "0.10.1" + LIBRDKAFKA_VERSION = "1.5.0" + LIBRDKAFKA_SOURCE_SHA256 = "f7fee59fdbf1286ec23ef0b35b2dfb41031c8727c90ced6435b8cf576f23a656" end diff --git a/rdkafka.gemspec b/rdkafka.gemspec index 844ae9eb..8229fef3 100644 --- a/rdkafka.gemspec +++ b/rdkafka.gemspec @@ -1,13 +1,12 @@ -# frozen_string_literal: true - require File.expand_path('lib/rdkafka/version', __dir__) Gem::Specification.new do |gem| - gem.authors = ['Thijs Cadier', 'Maciej Mensfeld'] - gem.email = ["contact@karafka.io"] + gem.authors = ['Thijs Cadier'] + gem.email = ["thijs@appsignal.com"] gem.description = "Modern Kafka client library for Ruby based on librdkafka" - gem.summary = "The rdkafka gem is a modern Kafka client library for Ruby based on librdkafka. It wraps the production-ready C client using the ffi gem and targets Kafka 1.0+ and Ruby 2.7+." + gem.summary = "The rdkafka gem is a modern Kafka client library for Ruby based on librdkafka. It wraps the production-ready C client using the ffi gem and targets Kafka 1.0+ and Ruby 2.4+." 
gem.license = 'MIT' + gem.homepage = 'https://github.com/thijsc/rdkafka-ruby' gem.files = `git ls-files`.split($\) gem.executables = gem.files.grep(%r{^bin/}).map{ |f| File.basename(f) } @@ -15,32 +14,15 @@ Gem::Specification.new do |gem| gem.name = 'rdkafka' gem.require_paths = ['lib'] gem.version = Rdkafka::VERSION - gem.required_ruby_version = '>= 3.0' + gem.required_ruby_version = '>= 2.4' gem.extensions = %w(ext/Rakefile) - gem.cert_chain = %w[certs/cert_chain.pem] - - if $PROGRAM_NAME.end_with?('gem') - gem.signing_key = File.expand_path('~/.ssh/gem-private_key.pem') - end - gem.add_dependency 'ffi', '~> 1.15' - gem.add_dependency 'mini_portile2', '~> 2.6' - gem.add_dependency 'rake', '> 12' + gem.add_dependency 'ffi', '~> 1.9' + gem.add_dependency 'mini_portile2', '~> 2.1' + gem.add_dependency 'rake', '>= 12.3' - gem.add_development_dependency 'pry' + gem.add_development_dependency 'pry', '~> 0.10' gem.add_development_dependency 'rspec', '~> 3.5' - gem.add_development_dependency 'rake' - gem.add_development_dependency 'simplecov' - gem.add_development_dependency 'guard' - gem.add_development_dependency 'guard-rspec' - - gem.metadata = { - 'funding_uri' => 'https://karafka.io/#become-pro', - 'homepage_uri' => 'https://karafka.io', - 'changelog_uri' => 'https://github.com/karafka/rdkafka-ruby/blob/main/CHANGELOG.md', - 'bug_tracker_uri' => 'https://github.com/karafka/rdkafka-ruby/issues', - 'source_code_uri' => 'https://github.com/karafka/rdkafka-ruby', - 'documentation_uri' => 'https://github.com/karafka/rdkafka-ruby/blob/main/README.md', - 'rubygems_mfa_required' => 'true' - } + gem.add_development_dependency 'rake', '~> 12.0' + gem.add_development_dependency 'simplecov', '~> 0.15' end diff --git a/renovate.json b/renovate.json deleted file mode 100644 index 39a2b6e9..00000000 --- a/renovate.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "$schema": "https://docs.renovatebot.com/renovate-schema.json", - "extends": [ - "config:base" - ] -} diff --git a/spec/rdkafka/abstract_handle_spec.rb b/spec/rdkafka/abstract_handle_spec.rb index 1607c8e1..6b437e53 100644 --- a/spec/rdkafka/abstract_handle_spec.rb +++ b/spec/rdkafka/abstract_handle_spec.rb @@ -1,4 +1,4 @@ -# frozen_string_literal: true +require "spec_helper" describe Rdkafka::AbstractHandle do let(:response) { 0 } @@ -76,51 +76,39 @@ def create_result end describe "#wait" do - context 'when pending_handle true' do - let(:pending_handle) { true } + let(:pending_handle) { true } - it "should wait until the timeout and then raise an error" do - expect(Kernel).not_to receive(:warn) - expect { - subject.wait(max_wait_timeout: 0.1) - }.to raise_error Rdkafka::AbstractHandle::WaitTimeoutError, /test_operation/ - end + it "should wait until the timeout and then raise an error" do + expect { + subject.wait(max_wait_timeout: 0.1) + }.to raise_error Rdkafka::AbstractHandle::WaitTimeoutError, /test_operation/ end - context 'when pending_handle false' do + context "when not pending anymore and no error" do let(:pending_handle) { false } + let(:result) { 1 } - it 'should show a deprecation warning when wait_timeout is set' do - expect(Kernel).to receive(:warn).with(Rdkafka::AbstractHandle::WAIT_TIMEOUT_DEPRECATION_MESSAGE) - subject.wait(wait_timeout: 0.1) + it "should return a result" do + wait_result = subject.wait + expect(wait_result).to eq(result) end - context "without error" do - let(:result) { 1 } - - it "should return a result" do - expect(Kernel).not_to receive(:warn) - wait_result = subject.wait - expect(wait_result).to eq(result) - end - - 
it "should wait without a timeout" do - expect(Kernel).not_to receive(:warn) - wait_result = subject.wait(max_wait_timeout: nil) - expect(wait_result).to eq(result) - end + it "should wait without a timeout" do + wait_result = subject.wait(max_wait_timeout: nil) + expect(wait_result).to eq(result) end + end - context "with error" do - let(:response) { 20 } + context "when not pending anymore and there was an error" do + let(:pending_handle) { false } + let(:response) { 20 } - it "should raise an rdkafka error" do - expect(Kernel).not_to receive(:warn) - expect { - subject.wait - }.to raise_error Rdkafka::RdkafkaError - end + it "should raise an rdkafka error" do + expect { + subject.wait + }.to raise_error Rdkafka::RdkafkaError end end end end + diff --git a/spec/rdkafka/admin/create_acl_handle_spec.rb b/spec/rdkafka/admin/create_acl_handle_spec.rb deleted file mode 100644 index 586bb097..00000000 --- a/spec/rdkafka/admin/create_acl_handle_spec.rb +++ /dev/null @@ -1,56 +0,0 @@ -# frozen_string_literal: true - -require "spec_helper" - -describe Rdkafka::Admin::CreateAclHandle do - # If create acl was successful there is no error object - # the error code is set to RD_KAFKA_RESP_ERR_NO_ERRORa - # https://github.com/confluentinc/librdkafka/blob/1f9f245ac409f50f724695c628c7a0d54a763b9a/src/rdkafka_error.c#L169 - let(:response) { Rdkafka::Bindings::RD_KAFKA_RESP_ERR_NO_ERROR } - - subject do - Rdkafka::Admin::CreateAclHandle.new.tap do |handle| - handle[:pending] = pending_handle - handle[:response] = response - # If create acl was successful there is no error object and the error_string is set to "" - # https://github.com/confluentinc/librdkafka/blob/1f9f245ac409f50f724695c628c7a0d54a763b9a/src/rdkafka_error.c#L178 - handle[:response_string] = FFI::MemoryPointer.from_string("") - end - end - - describe "#wait" do - let(:pending_handle) { true } - - it "should wait until the timeout and then raise an error" do - expect { - subject.wait(max_wait_timeout: 0.1) - }.to raise_error Rdkafka::Admin::CreateAclHandle::WaitTimeoutError, /create acl/ - end - - context "when not pending anymore and no error" do - let(:pending_handle) { false } - - it "should return a create acl report" do - report = subject.wait - - expect(report.rdkafka_response_string).to eq("") - end - - it "should wait without a timeout" do - report = subject.wait(max_wait_timeout: nil) - - expect(report.rdkafka_response_string).to eq("") - end - end - end - - describe "#raise_error" do - let(:pending_handle) { false } - - it "should raise the appropriate error" do - expect { - subject.raise_error - }.to raise_exception(Rdkafka::RdkafkaError, /Success \(no_error\)/) - end - end -end diff --git a/spec/rdkafka/admin/create_acl_report_spec.rb b/spec/rdkafka/admin/create_acl_report_spec.rb deleted file mode 100644 index 42b25a6a..00000000 --- a/spec/rdkafka/admin/create_acl_report_spec.rb +++ /dev/null @@ -1,18 +0,0 @@ -# frozen_string_literal: true - -require "spec_helper" - -describe Rdkafka::Admin::CreateAclReport do - subject { Rdkafka::Admin::CreateAclReport.new( - rdkafka_response: Rdkafka::Bindings::RD_KAFKA_RESP_ERR_NO_ERROR, - rdkafka_response_string: FFI::MemoryPointer.from_string("") - )} - - it "should get RD_KAFKA_RESP_ERR_NO_ERROR " do - expect(subject.rdkafka_response).to eq(0) - end - - it "should get empty string" do - expect(subject.rdkafka_response_string).to eq("") - end -end diff --git a/spec/rdkafka/admin/create_topic_handle_spec.rb b/spec/rdkafka/admin/create_topic_handle_spec.rb index 059ad1a2..52791228 100644 --- 
a/spec/rdkafka/admin/create_topic_handle_spec.rb +++ b/spec/rdkafka/admin/create_topic_handle_spec.rb @@ -1,4 +1,4 @@ -# frozen_string_literal: true +require "spec_helper" describe Rdkafka::Admin::CreateTopicHandle do let(:response) { 0 } diff --git a/spec/rdkafka/admin/create_topic_report_spec.rb b/spec/rdkafka/admin/create_topic_report_spec.rb index cb5e0ebf..d10bac3a 100644 --- a/spec/rdkafka/admin/create_topic_report_spec.rb +++ b/spec/rdkafka/admin/create_topic_report_spec.rb @@ -1,4 +1,4 @@ -# frozen_string_literal: true +require "spec_helper" describe Rdkafka::Admin::CreateTopicReport do subject { Rdkafka::Admin::CreateTopicReport.new( diff --git a/spec/rdkafka/admin/delete_acl_handle_spec.rb b/spec/rdkafka/admin/delete_acl_handle_spec.rb deleted file mode 100644 index eba56418..00000000 --- a/spec/rdkafka/admin/delete_acl_handle_spec.rb +++ /dev/null @@ -1,85 +0,0 @@ -# frozen_string_literal: true - -require "spec_helper" - -describe Rdkafka::Admin::DeleteAclHandle do - let(:response) { Rdkafka::Bindings::RD_KAFKA_RESP_ERR_NO_ERROR } - let(:resource_name) {"acl-test-topic"} - let(:resource_type) {Rdkafka::Bindings::RD_KAFKA_RESOURCE_TOPIC} - let(:resource_pattern_type) {Rdkafka::Bindings::RD_KAFKA_RESOURCE_PATTERN_LITERAL} - let(:principal) {"User:anonymous"} - let(:host) {"*"} - let(:operation) {Rdkafka::Bindings::RD_KAFKA_ACL_OPERATION_READ} - let(:permission_type) {Rdkafka::Bindings::RD_KAFKA_ACL_PERMISSION_TYPE_ALLOW} - let(:delete_acl_ptr) {FFI::Pointer::NULL} - - subject do - error_buffer = FFI::MemoryPointer.from_string(" " * 256) - delete_acl_ptr = Rdkafka::Bindings.rd_kafka_AclBinding_new( - resource_type, - FFI::MemoryPointer.from_string(resource_name), - resource_pattern_type, - FFI::MemoryPointer.from_string(principal), - FFI::MemoryPointer.from_string(host), - operation, - permission_type, - error_buffer, - 256 - ) - if delete_acl_ptr.null? 
- raise Rdkafka::Config::ConfigError.new(error_buffer.read_string) - end - pointer_array = [delete_acl_ptr] - delete_acls_array_ptr = FFI::MemoryPointer.new(:pointer) - delete_acls_array_ptr.write_array_of_pointer(pointer_array) - Rdkafka::Admin::DeleteAclHandle.new.tap do |handle| - handle[:pending] = pending_handle - handle[:response] = response - handle[:response_string] = FFI::MemoryPointer.from_string("") - handle[:matching_acls] = delete_acls_array_ptr - handle[:matching_acls_count] = 1 - end - end - - after do - if delete_acl_ptr != FFI::Pointer::NULL - Rdkafka::Bindings.rd_kafka_AclBinding_destroy(delete_acl_ptr) - end - end - - describe "#wait" do - let(:pending_handle) { true } - - it "should wait until the timeout and then raise an error" do - expect { - subject.wait(max_wait_timeout: 0.1) - }.to raise_error Rdkafka::Admin::DeleteAclHandle::WaitTimeoutError, /delete acl/ - end - - context "when not pending anymore and no error" do - let(:pending_handle) { false } - - it "should return a delete acl report" do - report = subject.wait - - expect(report.deleted_acls.length).to eq(1) - end - - it "should wait without a timeout" do - report = subject.wait(max_wait_timeout: nil) - - expect(report.deleted_acls[0].matching_acl_resource_name).to eq(resource_name) - end - end - end - - describe "#raise_error" do - let(:pending_handle) { false } - - it "should raise the appropriate error" do - expect { - subject.raise_error - }.to raise_exception(Rdkafka::RdkafkaError, /Success \(no_error\)/) - end - end -end diff --git a/spec/rdkafka/admin/delete_acl_report_spec.rb b/spec/rdkafka/admin/delete_acl_report_spec.rb deleted file mode 100644 index 01bc5b59..00000000 --- a/spec/rdkafka/admin/delete_acl_report_spec.rb +++ /dev/null @@ -1,72 +0,0 @@ -# frozen_string_literal: true - -require "spec_helper" - -describe Rdkafka::Admin::DeleteAclReport do - - let(:resource_name) {"acl-test-topic"} - let(:resource_type) {Rdkafka::Bindings::RD_KAFKA_RESOURCE_TOPIC} - let(:resource_pattern_type) {Rdkafka::Bindings::RD_KAFKA_RESOURCE_PATTERN_LITERAL} - let(:principal) {"User:anonymous"} - let(:host) {"*"} - let(:operation) {Rdkafka::Bindings::RD_KAFKA_ACL_OPERATION_READ} - let(:permission_type) {Rdkafka::Bindings::RD_KAFKA_ACL_PERMISSION_TYPE_ALLOW} - let(:delete_acl_ptr) {FFI::Pointer::NULL} - - subject do - error_buffer = FFI::MemoryPointer.from_string(" " * 256) - delete_acl_ptr = Rdkafka::Bindings.rd_kafka_AclBinding_new( - resource_type, - FFI::MemoryPointer.from_string(resource_name), - resource_pattern_type, - FFI::MemoryPointer.from_string(principal), - FFI::MemoryPointer.from_string(host), - operation, - permission_type, - error_buffer, - 256 - ) - if delete_acl_ptr.null? 
- raise Rdkafka::Config::ConfigError.new(error_buffer.read_string) - end - pointer_array = [delete_acl_ptr] - delete_acls_array_ptr = FFI::MemoryPointer.new(:pointer) - delete_acls_array_ptr.write_array_of_pointer(pointer_array) - Rdkafka::Admin::DeleteAclReport.new(matching_acls: delete_acls_array_ptr, matching_acls_count: 1) - end - - after do - if delete_acl_ptr != FFI::Pointer::NULL - Rdkafka::Bindings.rd_kafka_AclBinding_destroy(delete_acl_ptr) - end - end - - it "should get deleted acl resource type as Rdkafka::Bindings::RD_KAFKA_RESOURCE_TOPIC" do - expect(subject.deleted_acls[0].matching_acl_resource_type).to eq(Rdkafka::Bindings::RD_KAFKA_RESOURCE_TOPIC) - end - - it "should get deleted acl resource name as acl-test-topic" do - expect(subject.deleted_acls[0].matching_acl_resource_name).to eq(resource_name) - end - - it "should get deleted acl resource pattern type as Rdkafka::Bindings::RD_KAFKA_RESOURCE_PATTERN_LITERAL" do - expect(subject.deleted_acls[0].matching_acl_resource_pattern_type).to eq(Rdkafka::Bindings::RD_KAFKA_RESOURCE_PATTERN_LITERAL) - expect(subject.deleted_acls[0].matching_acl_pattern_type).to eq(Rdkafka::Bindings::RD_KAFKA_RESOURCE_PATTERN_LITERAL) - end - - it "should get deleted acl principal as User:anonymous" do - expect(subject.deleted_acls[0].matching_acl_principal).to eq("User:anonymous") - end - - it "should get deleted acl host as * " do - expect(subject.deleted_acls[0].matching_acl_host).to eq("*") - end - - it "should get deleted acl operation as Rdkafka::Bindings::RD_KAFKA_ACL_OPERATION_READ" do - expect(subject.deleted_acls[0].matching_acl_operation).to eq(Rdkafka::Bindings::RD_KAFKA_ACL_OPERATION_READ) - end - - it "should get deleted acl permission_type as Rdkafka::Bindings::RD_KAFKA_ACL_PERMISSION_TYPE_ALLOW" do - expect(subject.deleted_acls[0].matching_acl_permission_type).to eq(Rdkafka::Bindings::RD_KAFKA_ACL_PERMISSION_TYPE_ALLOW) - end -end diff --git a/spec/rdkafka/admin/delete_topic_handle_spec.rb b/spec/rdkafka/admin/delete_topic_handle_spec.rb index 95ae2155..6c5ddfdb 100644 --- a/spec/rdkafka/admin/delete_topic_handle_spec.rb +++ b/spec/rdkafka/admin/delete_topic_handle_spec.rb @@ -1,4 +1,4 @@ -# frozen_string_literal: true +require "spec_helper" describe Rdkafka::Admin::DeleteTopicHandle do let(:response) { 0 } diff --git a/spec/rdkafka/admin/delete_topic_report_spec.rb b/spec/rdkafka/admin/delete_topic_report_spec.rb index 77fbfb46..37036786 100644 --- a/spec/rdkafka/admin/delete_topic_report_spec.rb +++ b/spec/rdkafka/admin/delete_topic_report_spec.rb @@ -1,4 +1,4 @@ -# frozen_string_literal: true +require "spec_helper" describe Rdkafka::Admin::DeleteTopicReport do subject { Rdkafka::Admin::DeleteTopicReport.new( diff --git a/spec/rdkafka/admin/describe_acl_handle_spec.rb b/spec/rdkafka/admin/describe_acl_handle_spec.rb deleted file mode 100644 index 7c74cdc7..00000000 --- a/spec/rdkafka/admin/describe_acl_handle_spec.rb +++ /dev/null @@ -1,85 +0,0 @@ -# frozen_string_literal: true - -require "spec_helper" - -describe Rdkafka::Admin::DescribeAclHandle do - let(:response) { Rdkafka::Bindings::RD_KAFKA_RESP_ERR_NO_ERROR } - let(:resource_name) {"acl-test-topic"} - let(:resource_type) {Rdkafka::Bindings::RD_KAFKA_RESOURCE_TOPIC} - let(:resource_pattern_type) {Rdkafka::Bindings::RD_KAFKA_RESOURCE_PATTERN_LITERAL} - let(:principal) {"User:anonymous"} - let(:host) {"*"} - let(:operation) {Rdkafka::Bindings::RD_KAFKA_ACL_OPERATION_READ} - let(:permission_type) {Rdkafka::Bindings::RD_KAFKA_ACL_PERMISSION_TYPE_ALLOW} - let(:describe_acl_ptr) 
{FFI::Pointer::NULL} - - subject do - error_buffer = FFI::MemoryPointer.from_string(" " * 256) - describe_acl_ptr = Rdkafka::Bindings.rd_kafka_AclBinding_new( - resource_type, - FFI::MemoryPointer.from_string(resource_name), - resource_pattern_type, - FFI::MemoryPointer.from_string(principal), - FFI::MemoryPointer.from_string(host), - operation, - permission_type, - error_buffer, - 256 - ) - if describe_acl_ptr.null? - raise Rdkafka::Config::ConfigError.new(error_buffer.read_string) - end - pointer_array = [describe_acl_ptr] - describe_acls_array_ptr = FFI::MemoryPointer.new(:pointer) - describe_acls_array_ptr.write_array_of_pointer(pointer_array) - Rdkafka::Admin::DescribeAclHandle.new.tap do |handle| - handle[:pending] = pending_handle - handle[:response] = response - handle[:response_string] = FFI::MemoryPointer.from_string("") - handle[:acls] = describe_acls_array_ptr - handle[:acls_count] = 1 - end - end - - after do - if describe_acl_ptr != FFI::Pointer::NULL - Rdkafka::Bindings.rd_kafka_AclBinding_destroy(describe_acl_ptr) - end - end - - describe "#wait" do - let(:pending_handle) { true } - - it "should wait until the timeout and then raise an error" do - expect { - subject.wait(max_wait_timeout: 0.1) - }.to raise_error Rdkafka::Admin::DescribeAclHandle::WaitTimeoutError, /describe acl/ - end - - context "when not pending anymore and no error" do - let(:pending_handle) { false } - - it "should return a describe acl report" do - report = subject.wait - - expect(report.acls.length).to eq(1) - end - - it "should wait without a timeout" do - report = subject.wait(max_wait_timeout: nil) - - expect(report.acls[0].matching_acl_resource_name).to eq("acl-test-topic") - end - end - end - - describe "#raise_error" do - let(:pending_handle) { false } - - it "should raise the appropriate error" do - expect { - subject.raise_error - }.to raise_exception(Rdkafka::RdkafkaError, /Success \(no_error\)/) - end - end -end diff --git a/spec/rdkafka/admin/describe_acl_report_spec.rb b/spec/rdkafka/admin/describe_acl_report_spec.rb deleted file mode 100644 index f251cbfe..00000000 --- a/spec/rdkafka/admin/describe_acl_report_spec.rb +++ /dev/null @@ -1,73 +0,0 @@ -# frozen_string_literal: true - -require "spec_helper" - -describe Rdkafka::Admin::DescribeAclReport do - - let(:resource_name) {"acl-test-topic"} - let(:resource_type) {Rdkafka::Bindings::RD_KAFKA_RESOURCE_TOPIC} - let(:resource_pattern_type) {Rdkafka::Bindings::RD_KAFKA_RESOURCE_PATTERN_LITERAL} - let(:principal) {"User:anonymous"} - let(:host) {"*"} - let(:operation) {Rdkafka::Bindings::RD_KAFKA_ACL_OPERATION_READ} - let(:permission_type) {Rdkafka::Bindings::RD_KAFKA_ACL_PERMISSION_TYPE_ALLOW} - let(:describe_acl_ptr) {FFI::Pointer::NULL} - - subject do - error_buffer = FFI::MemoryPointer.from_string(" " * 256) - describe_acl_ptr = Rdkafka::Bindings.rd_kafka_AclBinding_new( - resource_type, - FFI::MemoryPointer.from_string(resource_name), - resource_pattern_type, - FFI::MemoryPointer.from_string(principal), - FFI::MemoryPointer.from_string(host), - operation, - permission_type, - error_buffer, - 256 - ) - if describe_acl_ptr.null? 
- raise Rdkafka::Config::ConfigError.new(error_buffer.read_string) - end - pointer_array = [describe_acl_ptr] - describe_acls_array_ptr = FFI::MemoryPointer.new(:pointer) - describe_acls_array_ptr.write_array_of_pointer(pointer_array) - Rdkafka::Admin::DescribeAclReport.new(acls: describe_acls_array_ptr, acls_count: 1) - end - - after do - if describe_acl_ptr != FFI::Pointer::NULL - Rdkafka::Bindings.rd_kafka_AclBinding_destroy(describe_acl_ptr) - end - end - - - it "should get matching acl resource type as Rdkafka::Bindings::RD_KAFKA_RESOURCE_TOPIC" do - expect(subject.acls[0].matching_acl_resource_type).to eq(Rdkafka::Bindings::RD_KAFKA_RESOURCE_TOPIC) - end - - it "should get matching acl resource name as acl-test-topic" do - expect(subject.acls[0].matching_acl_resource_name).to eq(resource_name) - end - - it "should get matching acl resource pattern type as Rdkafka::Bindings::RD_KAFKA_RESOURCE_PATTERN_LITERAL" do - expect(subject.acls[0].matching_acl_resource_pattern_type).to eq(Rdkafka::Bindings::RD_KAFKA_RESOURCE_PATTERN_LITERAL) - expect(subject.acls[0].matching_acl_pattern_type).to eq(Rdkafka::Bindings::RD_KAFKA_RESOURCE_PATTERN_LITERAL) - end - - it "should get matching acl principal as User:anonymous" do - expect(subject.acls[0].matching_acl_principal).to eq("User:anonymous") - end - - it "should get matching acl host as * " do - expect(subject.acls[0].matching_acl_host).to eq("*") - end - - it "should get matching acl operation as Rdkafka::Bindings::RD_KAFKA_ACL_OPERATION_READ" do - expect(subject.acls[0].matching_acl_operation).to eq(Rdkafka::Bindings::RD_KAFKA_ACL_OPERATION_READ) - end - - it "should get matching acl permission_type as Rdkafka::Bindings::RD_KAFKA_ACL_PERMISSION_TYPE_ALLOW" do - expect(subject.acls[0].matching_acl_permission_type).to eq(Rdkafka::Bindings::RD_KAFKA_ACL_PERMISSION_TYPE_ALLOW) - end -end diff --git a/spec/rdkafka/admin_spec.rb b/spec/rdkafka/admin_spec.rb index 5fc1bcd0..6127e8a0 100644 --- a/spec/rdkafka/admin_spec.rb +++ b/spec/rdkafka/admin_spec.rb @@ -1,56 +1,21 @@ -# frozen_string_literal: true - +require "spec_helper" require "ostruct" describe Rdkafka::Admin do - let(:config) { rdkafka_config } - let(:admin) { config.admin } + let(:config) { rdkafka_config } + let(:admin) { config.admin } after do # Registry should always end up being empty expect(Rdkafka::Admin::CreateTopicHandle::REGISTRY).to be_empty - expect(Rdkafka::Admin::CreatePartitionsHandle::REGISTRY).to be_empty - expect(Rdkafka::Admin::DescribeAclHandle::REGISTRY).to be_empty - expect(Rdkafka::Admin::CreateAclHandle::REGISTRY).to be_empty - expect(Rdkafka::Admin::DeleteAclHandle::REGISTRY).to be_empty admin.close end - let(:topic_name) { "test-topic-#{SecureRandom.uuid}" } + let(:topic_name) { "test-topic-#{Random.new.rand(0..1_000_000)}" } let(:topic_partition_count) { 3 } let(:topic_replication_factor) { 1 } let(:topic_config) { {"cleanup.policy" => "compact", "min.cleanable.dirty.ratio" => 0.8} } let(:invalid_topic_config) { {"cleeeeenup.policee" => "campact"} } - let(:group_name) { "test-group-#{SecureRandom.uuid}" } - - let(:resource_name) {"acl-test-topic"} - let(:resource_type) {Rdkafka::Bindings::RD_KAFKA_RESOURCE_TOPIC} - let(:resource_pattern_type) {Rdkafka::Bindings::RD_KAFKA_RESOURCE_PATTERN_LITERAL} - let(:principal) {"User:anonymous"} - let(:host) {"*"} - let(:operation) {Rdkafka::Bindings::RD_KAFKA_ACL_OPERATION_READ} - let(:permission_type) {Rdkafka::Bindings::RD_KAFKA_ACL_PERMISSION_TYPE_ALLOW} - - describe '#describe_errors' do - let(:errors) { 
admin.class.describe_errors } - - it { expect(errors.size).to eq(168) } - it { expect(errors[-184]).to eq(code: -184, description: 'Local: Queue full', name: '_QUEUE_FULL') } - it { expect(errors[21]).to eq(code: 21, description: 'Broker: Invalid required acks value', name: 'INVALID_REQUIRED_ACKS') } - end - - describe 'admin without auto-start' do - let(:admin) { config.admin(native_kafka_auto_start: false) } - - it 'expect to be able to start it later and close' do - admin.start - admin.close - end - - it 'expect to be able to close it without starting' do - admin.close - end - end describe "#create_topic" do describe "called with invalid input" do @@ -66,7 +31,7 @@ }.to raise_exception { |ex| expect(ex).to be_a(Rdkafka::RdkafkaError) expect(ex.message).to match(/Broker: Invalid topic \(topic_exception\)/) -expect(ex.broker_message).to match(/Topic name.*is invalid: .* contains one or more characters other than ASCII alphanumerics, '.', '_' and '-'/) + expect(ex.broker_message).to match(/Topic name.*is illegal, it contains a character other than ASCII alphanumerics/) } end end @@ -87,7 +52,7 @@ end describe "with an invalid partition count" do - let(:topic_partition_count) { -999 } + let(:topic_partition_count) { -1 } it "raises an exception" do expect { @@ -150,275 +115,6 @@ end end - describe "describe_configs" do - subject(:resources_results) { admin.describe_configs(resources).wait.resources } - - before do - admin.create_topic(topic_name, 2, 1).wait - sleep(1) - end - - context 'when describing config of an existing topic' do - let(:resources) { [{ resource_type: 2, resource_name: topic_name }] } - - it do - expect(resources_results.size).to eq(1) - expect(resources_results.first.type).to eq(2) - expect(resources_results.first.name).to eq(topic_name) - expect(resources_results.first.configs.size).to be > 25 - expect(resources_results.first.configs.first.name).to eq('compression.type') - expect(resources_results.first.configs.first.value).to eq('producer') - expect(resources_results.first.configs.map(&:synonyms)).not_to be_empty - end - end - - context 'when describing config of a non-existing topic' do - let(:resources) { [{ resource_type: 2, resource_name: SecureRandom.uuid }] } - - it 'expect to raise error' do - expect { resources_results }.to raise_error(Rdkafka::RdkafkaError, /unknown_topic_or_part/) - end - end - - context 'when describing both existing and non-existing topics' do - let(:resources) do - [ - { resource_type: 2, resource_name: topic_name }, - { resource_type: 2, resource_name: SecureRandom.uuid } - ] - end - - it 'expect to raise error' do - expect { resources_results }.to raise_error(Rdkafka::RdkafkaError, /unknown_topic_or_part/) - end - end - - context 'when describing multiple existing topics' do - let(:resources) do - [ - { resource_type: 2, resource_name: 'example_topic' }, - { resource_type: 2, resource_name: topic_name } - ] - end - - it do - expect(resources_results.size).to eq(2) - expect(resources_results.first.type).to eq(2) - expect(resources_results.first.name).to eq('example_topic') - expect(resources_results.last.type).to eq(2) - expect(resources_results.last.name).to eq(topic_name) - end - end - - context 'when trying to describe invalid resource type' do - let(:resources) { [{ resource_type: 0, resource_name: SecureRandom.uuid }] } - - it 'expect to raise error' do - expect { resources_results }.to raise_error(Rdkafka::RdkafkaError, /invalid_request/) - end - end - - context 'when trying to describe invalid broker' do - let(:resources) { [{ 
resource_type: 4, resource_name: 'non-existing' }] } - - it 'expect to raise error' do - expect { resources_results }.to raise_error(Rdkafka::RdkafkaError, /invalid_arg/) - end - end - - context 'when trying to describe valid broker' do - let(:resources) { [{ resource_type: 4, resource_name: '1' }] } - - it do - expect(resources_results.size).to eq(1) - expect(resources_results.first.type).to eq(4) - expect(resources_results.first.name).to eq('1') - expect(resources_results.first.configs.size).to be > 230 - expect(resources_results.first.configs.first.name).to eq('log.cleaner.min.compaction.lag.ms') - expect(resources_results.first.configs.first.value).to eq('0') - expect(resources_results.first.configs.map(&:synonyms)).not_to be_empty - end - end - - context 'when describing valid broker with topics in one request' do - let(:resources) do - [ - { resource_type: 4, resource_name: '1' }, - { resource_type: 2, resource_name: topic_name } - ] - end - - it do - expect(resources_results.size).to eq(2) - expect(resources_results.first.type).to eq(4) - expect(resources_results.first.name).to eq('1') - expect(resources_results.first.configs.size).to be > 230 - expect(resources_results.first.configs.first.name).to eq('log.cleaner.min.compaction.lag.ms') - expect(resources_results.first.configs.first.value).to eq('0') - expect(resources_results.last.type).to eq(2) - expect(resources_results.last.name).to eq(topic_name) - expect(resources_results.last.configs.size).to be > 25 - expect(resources_results.last.configs.first.name).to eq('compression.type') - expect(resources_results.last.configs.first.value).to eq('producer') - end - end - end - - describe "incremental_alter_configs" do - subject(:resources_results) { admin.incremental_alter_configs(resources_with_configs).wait.resources } - - before do - admin.create_topic(topic_name, 2, 1).wait - sleep(1) - end - - context 'when altering one topic with one valid config via set' do - let(:target_retention) { (86400002 + rand(10_000)).to_s } - let(:resources_with_configs) do - [ - { - resource_type: 2, - resource_name: topic_name, - configs: [ - { - name: 'delete.retention.ms', - value: target_retention, - op_type: 0 - } - ] - } - ] - end - - it do - expect(resources_results.size).to eq(1) - expect(resources_results.first.type).to eq(2) - expect(resources_results.first.name).to eq(topic_name) - - ret_config = admin.describe_configs(resources_with_configs).wait.resources.first.configs.find do |config| - config.name == 'delete.retention.ms' - end - - expect(ret_config.value).to eq(target_retention) - end - end - - context 'when altering one topic with one valid config via delete' do - let(:target_retention) { (8640002 + rand(10_000)).to_s } - let(:resources_with_configs) do - [ - { - resource_type: 2, - resource_name: topic_name, - configs: [ - { - name: 'delete.retention.ms', - value: target_retention, - op_type: 1 - } - ] - } - ] - end - - it do - expect(resources_results.size).to eq(1) - expect(resources_results.first.type).to eq(2) - expect(resources_results.first.name).to eq(topic_name) - ret_config = admin.describe_configs(resources_with_configs).wait.resources.first.configs.find do |config| - config.name == 'delete.retention.ms' - end - - expect(ret_config.value).to eq('86400000') - end - end - - context 'when altering one topic with one valid config via append' do - let(:target_policy) { 'compact' } - let(:resources_with_configs) do - [ - { - resource_type: 2, - resource_name: topic_name, - configs: [ - { - name: 'cleanup.policy', - value: 
target_policy, - op_type: 2 - } - ] - } - ] - end - - it do - expect(resources_results.size).to eq(1) - expect(resources_results.first.type).to eq(2) - expect(resources_results.first.name).to eq(topic_name) - - ret_config = admin.describe_configs(resources_with_configs).wait.resources.first.configs.find do |config| - config.name == 'cleanup.policy' - end - - expect(ret_config.value).to eq("delete,#{target_policy}") - end - end - - context 'when altering one topic with one valid config via subtrack' do - let(:target_policy) { 'delete' } - let(:resources_with_configs) do - [ - { - resource_type: 2, - resource_name: topic_name, - configs: [ - { - name: 'cleanup.policy', - value: target_policy, - op_type: 3 - } - ] - } - ] - end - - it do - expect(resources_results.size).to eq(1) - expect(resources_results.first.type).to eq(2) - expect(resources_results.first.name).to eq(topic_name) - - ret_config = admin.describe_configs(resources_with_configs).wait.resources.first.configs.find do |config| - config.name == 'cleanup.policy' - end - - expect(ret_config.value).to eq('') - end - end - - context 'when altering one topic with invalid config' do - let(:target_retention) { '-10' } - let(:resources_with_configs) do - [ - { - resource_type: 2, - resource_name: topic_name, - configs: [ - { - name: 'delete.retention.ms', - value: target_retention, - op_type: 0 - } - ] - } - ] - end - - it 'expect to raise error' do - expect { resources_results }.to raise_error(Rdkafka::RdkafkaError, /invalid_config/) - end - end - end - describe "#delete_topic" do describe "called with invalid input" do # https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/internals/Topic.java#L29 @@ -478,6 +174,7 @@ end end + it "deletes a topic that was newly created" do create_topic_handle = admin.create_topic(topic_name, topic_partition_count, topic_replication_factor) create_topic_report = create_topic_handle.wait(max_wait_timeout: 15.0) @@ -503,238 +200,4 @@ expect(delete_topic_report.result_name).to eq(topic_name) end end - - describe "#ACL tests" do - let(:non_existing_resource_name) {"non-existing-topic"} - before do - #create topic for testing acl - create_topic_handle = admin.create_topic(resource_name, topic_partition_count, topic_replication_factor) - create_topic_report = create_topic_handle.wait(max_wait_timeout: 15.0) - end - - after do - #delete acl - delete_acl_handle = admin.delete_acl(resource_type: resource_type, resource_name: resource_name, resource_pattern_type: resource_pattern_type, principal: principal, host: host, operation: operation, permission_type: permission_type) - delete_acl_report = delete_acl_handle.wait(max_wait_timeout: 15.0) - - #delete topic that was created for testing acl - delete_topic_handle = admin.delete_topic(resource_name) - delete_topic_report = delete_topic_handle.wait(max_wait_timeout: 15.0) - end - - describe "#create_acl" do - it "create acl for a topic that does not exist" do - # acl creation for resources that does not exist will still get created successfully. 
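Aside, not part of the diff: the removed ACL specs around this point exercise `Admin#create_acl`, `Admin#describe_acl`, and `Admin#delete_acl`, which exist only on the removed (newer) side of this diff. A condensed usage sketch of that API, reusing the same binding constants the specs use; broker address and resource name are placeholders.

```ruby
require "rdkafka"

admin = Rdkafka::Config.new("bootstrap.servers" => "localhost:9092").admin

# One ACL binding described as keyword arguments, mirroring the removed specs.
acl = {
  resource_type: Rdkafka::Bindings::RD_KAFKA_RESOURCE_TOPIC,
  resource_name: "acl-test-topic",
  resource_pattern_type: Rdkafka::Bindings::RD_KAFKA_RESOURCE_PATTERN_LITERAL,
  principal: "User:anonymous",
  host: "*",
  operation: Rdkafka::Bindings::RD_KAFKA_ACL_OPERATION_READ,
  permission_type: Rdkafka::Bindings::RD_KAFKA_ACL_PERMISSION_TYPE_ALLOW
}

# Create, inspect, then remove the binding; each call returns a handle to wait on.
admin.create_acl(**acl).wait(max_wait_timeout: 15.0)
report = admin.describe_acl(**acl).wait(max_wait_timeout: 15.0)
puts "matching acls: #{report.acls.size}"
admin.delete_acl(**acl).wait(max_wait_timeout: 15.0)

admin.close
```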
- create_acl_handle = admin.create_acl(resource_type: resource_type, resource_name: non_existing_resource_name, resource_pattern_type: resource_pattern_type, principal: principal, host: host, operation: operation, permission_type: permission_type) - create_acl_report = create_acl_handle.wait(max_wait_timeout: 15.0) - expect(create_acl_report.rdkafka_response).to eq(0) - expect(create_acl_report.rdkafka_response_string).to eq("") - - # delete the acl that was created for a non existing topic" - delete_acl_handle = admin.delete_acl(resource_type: resource_type, resource_name: non_existing_resource_name, resource_pattern_type: resource_pattern_type, principal: principal, host: host, operation: operation, permission_type: permission_type) - delete_acl_report = delete_acl_handle.wait(max_wait_timeout: 15.0) - expect(delete_acl_handle[:response]).to eq(0) - expect(delete_acl_report.deleted_acls.size).to eq(1) - end - - it "creates a acl for topic that was newly created" do - create_acl_handle = admin.create_acl(resource_type: resource_type, resource_name: resource_name, resource_pattern_type: resource_pattern_type, principal: principal, host: host, operation: operation, permission_type: permission_type) - create_acl_report = create_acl_handle.wait(max_wait_timeout: 15.0) - expect(create_acl_report.rdkafka_response).to eq(0) - expect(create_acl_report.rdkafka_response_string).to eq("") - end - end - - describe "#describe_acl" do - it "describe acl of a topic that does not exist" do - describe_acl_handle = admin.describe_acl(resource_type: resource_type, resource_name: non_existing_resource_name, resource_pattern_type: resource_pattern_type, principal: principal, host: host, operation: operation, permission_type: permission_type) - describe_acl_report = describe_acl_handle.wait(max_wait_timeout: 15.0) - expect(describe_acl_handle[:response]).to eq(0) - expect(describe_acl_report.acls.size).to eq(0) - end - - it "create acls and describe the newly created acls" do - #create_acl - create_acl_handle = admin.create_acl(resource_type: resource_type, resource_name: "test_acl_topic_1", resource_pattern_type: resource_pattern_type, principal: principal, host: host, operation: operation, permission_type: permission_type) - create_acl_report = create_acl_handle.wait(max_wait_timeout: 15.0) - expect(create_acl_report.rdkafka_response).to eq(0) - expect(create_acl_report.rdkafka_response_string).to eq("") - - create_acl_handle = admin.create_acl(resource_type: resource_type, resource_name: "test_acl_topic_2", resource_pattern_type: resource_pattern_type, principal: principal, host: host, operation: operation, permission_type: permission_type) - create_acl_report = create_acl_handle.wait(max_wait_timeout: 15.0) - expect(create_acl_report.rdkafka_response).to eq(0) - expect(create_acl_report.rdkafka_response_string).to eq("") - - # Since we create and immediately check, this is slow on loaded CIs, hence we wait - sleep(2) - - #describe_acl - describe_acl_handle = admin.describe_acl(resource_type: Rdkafka::Bindings::RD_KAFKA_RESOURCE_ANY, resource_name: nil, resource_pattern_type: Rdkafka::Bindings::RD_KAFKA_RESOURCE_PATTERN_ANY, principal: nil, host: nil, operation: Rdkafka::Bindings::RD_KAFKA_ACL_OPERATION_ANY, permission_type: Rdkafka::Bindings::RD_KAFKA_ACL_PERMISSION_TYPE_ANY) - describe_acl_report = describe_acl_handle.wait(max_wait_timeout: 15.0) - expect(describe_acl_handle[:response]).to eq(0) - expect(describe_acl_report.acls.length).to eq(2) - end - end - - describe "#delete_acl" do - it "delete acl 
of a topic that does not exist" do - delete_acl_handle = admin.delete_acl(resource_type: resource_type, resource_name: non_existing_resource_name, resource_pattern_type: resource_pattern_type, principal: principal, host: host, operation: operation, permission_type: permission_type) - delete_acl_report = delete_acl_handle.wait(max_wait_timeout: 15.0) - expect(delete_acl_handle[:response]).to eq(0) - expect(delete_acl_report.deleted_acls.size).to eq(0) - end - - it "create an acl and delete the newly created acl" do - #create_acl - create_acl_handle = admin.create_acl(resource_type: resource_type, resource_name: "test_acl_topic_1", resource_pattern_type: resource_pattern_type, principal: principal, host: host, operation: operation, permission_type: permission_type) - create_acl_report = create_acl_handle.wait(max_wait_timeout: 15.0) - expect(create_acl_report.rdkafka_response).to eq(0) - expect(create_acl_report.rdkafka_response_string).to eq("") - - create_acl_handle = admin.create_acl(resource_type: resource_type, resource_name: "test_acl_topic_2", resource_pattern_type: resource_pattern_type, principal: principal, host: host, operation: operation, permission_type: permission_type) - create_acl_report = create_acl_handle.wait(max_wait_timeout: 15.0) - expect(create_acl_report.rdkafka_response).to eq(0) - expect(create_acl_report.rdkafka_response_string).to eq("") - - #delete_acl - resource_name nil - to delete all acls with any resource name and matching all other filters. - delete_acl_handle = admin.delete_acl(resource_type: resource_type, resource_name: nil, resource_pattern_type: resource_pattern_type, principal: principal, host: host, operation: operation, permission_type: permission_type) - delete_acl_report = delete_acl_handle.wait(max_wait_timeout: 15.0) - expect(delete_acl_handle[:response]).to eq(0) - expect(delete_acl_report.deleted_acls.length).to eq(2) - - end - end - end - - describe('Group tests') do - describe "#delete_group" do - describe("with an existing group") do - let(:consumer_config) { rdkafka_consumer_config('group.id': group_name) } - let(:producer_config) { rdkafka_producer_config } - let(:producer) { producer_config.producer } - let(:consumer) { consumer_config.consumer } - - before do - # Create a topic, post a message to it, consume it and commit offsets, this will create a group that we can then delete. 
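Aside, not part of the diff: the group-deletion spec below first materialises a consumer group by producing a message, consuming it, and committing the offset, and only then calls `Admin#delete_group` (an API present only on the removed, newer side of this diff). The same flow condensed into a sketch; broker address, topic, and group id are placeholders.

```ruby
require "rdkafka"

servers  = { "bootstrap.servers" => "localhost:9092" }
admin    = Rdkafka::Config.new(servers).admin
producer = Rdkafka::Config.new(servers).producer
consumer = Rdkafka::Config.new(servers.merge("group.id" => "example-group")).consumer

admin.create_topic("example-topic", 3, 1).wait(max_wait_timeout: 15.0)
producer.produce(topic: "example-topic", payload: "test", key: "test").wait(max_wait_timeout: 15.0)

# Consuming and committing an offset is what actually registers the group
# with the cluster; keep polling until the assignment delivers the message.
consumer.subscribe("example-topic")
message = nil
message = consumer.poll(1_000) until message
consumer.commit
consumer.close
producer.close

report = admin.delete_group("example-group").wait(max_wait_timeout: 15.0)
puts "deleted group: #{report.result_name}"
admin.close
```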
- admin.create_topic(topic_name, topic_partition_count, topic_replication_factor).wait(max_wait_timeout: 15.0) - - producer.produce(topic: topic_name, payload: "test", key: "test").wait(max_wait_timeout: 15.0) - - consumer.subscribe(topic_name) - wait_for_assignment(consumer) - message = consumer.poll(100) - - expect(message).to_not be_nil - - consumer.commit - consumer.close - end - - after do - producer.close - consumer.close - end - - it "deletes the group" do - delete_group_handle = admin.delete_group(group_name) - report = delete_group_handle.wait(max_wait_timeout: 15.0) - - expect(report.result_name).to eql(group_name) - end - end - - describe "called with invalid input" do - describe "with the name of a group that does not exist" do - it "raises an exception" do - delete_group_handle = admin.delete_group(group_name) - - expect { - delete_group_handle.wait(max_wait_timeout: 15.0) - }.to raise_exception { |ex| - expect(ex).to be_a(Rdkafka::RdkafkaError) - expect(ex.message).to match(/Broker: The group id does not exist \(group_id_not_found\)/) - } - end - end - end - - end - end - - describe '#create_partitions' do - let(:metadata) { admin.metadata(topic_name).topics.first } - - context 'when topic does not exist' do - it 'expect to fail due to unknown partition' do - expect { admin.create_partitions(topic_name, 10).wait }.to raise_error(Rdkafka::RdkafkaError, /unknown_topic_or_part/) - end - end - - context 'when topic already has the desired number of partitions' do - before { admin.create_topic(topic_name, 2, 1).wait } - - it 'expect not to change number of partitions' do - expect { admin.create_partitions(topic_name, 2).wait }.to raise_error(Rdkafka::RdkafkaError, /invalid_partitions/) - expect(metadata[:partition_count]).to eq(2) - end - end - - context 'when topic has more than the requested number of partitions' do - before { admin.create_topic(topic_name, 5, 1).wait } - - it 'expect not to change number of partitions' do - expect { admin.create_partitions(topic_name, 2).wait }.to raise_error(Rdkafka::RdkafkaError, /invalid_partitions/) - expect(metadata[:partition_count]).to eq(5) - end - end - - context 'when topic has less then desired number of partitions' do - before do - admin.create_topic(topic_name, 1, 1).wait - sleep(1) - end - - it 'expect to change number of partitions' do - admin.create_partitions(topic_name, 10).wait - expect(metadata[:partition_count]).to eq(10) - end - end - end - - describe '#oauthbearer_set_token' do - context 'when sasl not configured' do - it 'should return RD_KAFKA_RESP_ERR__STATE' do - response = admin.oauthbearer_set_token( - token: "foo", - lifetime_ms: Time.now.to_i*1000 + 900 * 1000, - principal_name: "kafka-cluster" - ) - expect(response).to eq(Rdkafka::Bindings::RD_KAFKA_RESP_ERR__STATE) - end - end - - context 'when sasl configured' do - before do - config_sasl = rdkafka_config( - "security.protocol": "sasl_ssl", - "sasl.mechanisms": 'OAUTHBEARER' - ) - $admin_sasl = config_sasl.admin - end - - after do - $admin_sasl.close - end - - it 'should succeed' do - - response = $admin_sasl.oauthbearer_set_token( - token: "foo", - lifetime_ms: Time.now.to_i*1000 + 900 * 1000, - principal_name: "kafka-cluster" - ) - expect(response).to eq(0) - end - end - end end diff --git a/spec/rdkafka/bindings_spec.rb b/spec/rdkafka/bindings_spec.rb index 5d569664..8f834f9c 100644 --- a/spec/rdkafka/bindings_spec.rb +++ b/spec/rdkafka/bindings_spec.rb @@ -1,5 +1,4 @@ -# frozen_string_literal: true - +require "spec_helper" require 'zlib' describe 
Rdkafka::Bindings do @@ -36,16 +35,6 @@ expect(log_queue).to have_received(:<<).with([Logger::FATAL, "rdkafka: log line"]) end - it "should log fatal messages" do - Rdkafka::Bindings::LogCallback.call(nil, 1, nil, "log line") - expect(log_queue).to have_received(:<<).with([Logger::FATAL, "rdkafka: log line"]) - end - - it "should log fatal messages" do - Rdkafka::Bindings::LogCallback.call(nil, 2, nil, "log line") - expect(log_queue).to have_received(:<<).with([Logger::FATAL, "rdkafka: log line"]) - end - it "should log error messages" do Rdkafka::Bindings::LogCallback.call(nil, 3, nil, "log line") expect(log_queue).to have_received(:<<).with([Logger::ERROR, "rdkafka: log line"]) @@ -61,11 +50,6 @@ expect(log_queue).to have_received(:<<).with([Logger::INFO, "rdkafka: log line"]) end - it "should log info messages" do - Rdkafka::Bindings::LogCallback.call(nil, 6, nil, "log line") - expect(log_queue).to have_received(:<<).with([Logger::INFO, "rdkafka: log line"]) - end - it "should log debug messages" do Rdkafka::Bindings::LogCallback.call(nil, 7, nil, "log line") expect(log_queue).to have_received(:<<).with([Logger::DEBUG, "rdkafka: log line"]) @@ -92,13 +76,6 @@ result_2 = (Zlib.crc32(partition_key) % partition_count) expect(result_1).to eq(result_2) end - - it "should return the partition calculated by the specified partitioner" do - result_1 = Rdkafka::Bindings.partitioner(partition_key, partition_count, "murmur2") - ptr = FFI::MemoryPointer.from_string(partition_key) - result_2 = Rdkafka::Bindings.rd_kafka_msg_partitioner_murmur2(nil, ptr, partition_key.size, partition_count, nil, nil) - expect(result_1).to eq(result_2) - end end describe "stats callback" do @@ -147,86 +124,4 @@ end end end - - describe "oauthbearer set token" do - - context "without args" do - it "should raise argument error" do - expect { - Rdkafka::Bindings.rd_kafka_oauthbearer_set_token - }.to raise_error(ArgumentError) - end - end - - context "with args" do - before do - DEFAULT_TOKEN_EXPIRY_SECONDS = 900 - $token_value = "token" - $md_lifetime_ms = Time.now.to_i*1000 + DEFAULT_TOKEN_EXPIRY_SECONDS * 1000 - $md_principal_name = "kafka-cluster" - $extensions = nil - $extension_size = 0 - $error_buffer = FFI::MemoryPointer.from_string(" " * 256) - end - - it "should set token or capture failure" do - RdKafkaTestConsumer.with do |consumer_ptr| - response = Rdkafka::Bindings.rd_kafka_oauthbearer_set_token(consumer_ptr, $token_value, $md_lifetime_ms, $md_principal_name, $extensions, $extension_size, $error_buffer, 256) - expect(response).to eq(Rdkafka::Bindings::RD_KAFKA_RESP_ERR__STATE) - expect($error_buffer.read_string).to eq("SASL/OAUTHBEARER is not the configured authentication mechanism") - end - end - end - end - - describe "oauthbearer set token failure" do - - context "without args" do - - it "should fail" do - expect { - Rdkafka::Bindings.rd_kafka_oauthbearer_set_token_failure - }.to raise_error(ArgumentError) - end - end - - context "with args" do - it "should succeed" do - expect { - errstr = "error" - RdKafkaTestConsumer.with do |consumer_ptr| - Rdkafka::Bindings.rd_kafka_oauthbearer_set_token_failure(consumer_ptr, errstr) - end - }.to_not raise_error - end - end - end - - describe "oauthbearer callback" do - - context "without an oauthbearer callback" do - it "should do nothing" do - expect { - Rdkafka::Bindings::OAuthbearerTokenRefreshCallback.call(nil, "", nil) - }.not_to raise_error - end - end - - context "with an oauthbearer callback" do - before do - Rdkafka::Config.oauthbearer_token_refresh_callback 
= lambda do |config, client_name| - $received_config = config - $received_client_name = client_name - end - end - - it "should call the oauth bearer callback and receive config and client name" do - RdKafkaTestConsumer.with do |consumer_ptr| - Rdkafka::Bindings::OAuthbearerTokenRefreshCallback.call(consumer_ptr, "{}", nil) - expect($received_config).to eq("{}") - expect($received_client_name).to match(/consumer/) - end - end - end - end end diff --git a/spec/rdkafka/callbacks_spec.rb b/spec/rdkafka/callbacks_spec.rb index 93126672..49d66306 100644 --- a/spec/rdkafka/callbacks_spec.rb +++ b/spec/rdkafka/callbacks_spec.rb @@ -1,4 +1,4 @@ -# frozen_string_literal: true +require "spec_helper" describe Rdkafka::Callbacks do diff --git a/spec/rdkafka/config_spec.rb b/spec/rdkafka/config_spec.rb index a188d858..5d2ff8c8 100644 --- a/spec/rdkafka/config_spec.rb +++ b/spec/rdkafka/config_spec.rb @@ -1,4 +1,4 @@ -# frozen_string_literal: true +require "spec_helper" describe Rdkafka::Config do context "logger" do @@ -22,7 +22,6 @@ it "supports logging queue" do log = StringIO.new Rdkafka::Config.logger = Logger.new(log) - Rdkafka::Config.ensure_log_thread Rdkafka::Config.log_queue << [Logger::FATAL, "I love testing"] 20.times do @@ -32,25 +31,6 @@ expect(log.string).to include "FATAL -- : I love testing" end - - it "expect to start new logger thread after fork and work" do - reader, writer = IO.pipe - - pid = fork do - $stdout.reopen(writer) - Rdkafka::Config.logger = Logger.new($stdout) - reader.close - producer = rdkafka_producer_config(debug: 'all').producer - producer.close - writer.close - sleep(1) - end - - writer.close - Process.wait(pid) - output = reader.read - expect(output.split("\n").size).to be >= 20 - end end context "statistics callback" do @@ -115,39 +95,6 @@ def call(stats); end end end - context "oauthbearer calllback" do - context "with a proc/lambda" do - it "should set the callback" do - expect { - Rdkafka::Config.oauthbearer_token_refresh_callback = lambda do |config, client_name| - puts config - puts client_name - end - }.not_to raise_error - expect(Rdkafka::Config.oauthbearer_token_refresh_callback).to respond_to :call - end - end - - context "with a callable object" do - it "should set the callback" do - callback = Class.new do - def call(config, client_name); end - end - - expect { - Rdkafka::Config.oauthbearer_token_refresh_callback = callback.new - }.not_to raise_error - expect(Rdkafka::Config.oauthbearer_token_refresh_callback).to respond_to :call - end - end - - it "should not accept a callback that's not callable" do - expect { - Rdkafka::Config.oauthbearer_token_refresh_callback = 'not a callback' - }.to raise_error(TypeError) - end - end - context "configuration" do it "should store configuration" do config = Rdkafka::Config.new @@ -161,15 +108,7 @@ def call(config, client_name); end end it "should create a consumer with valid config" do - consumer = rdkafka_consumer_config.consumer - expect(consumer).to be_a Rdkafka::Consumer - consumer.close - end - - it "should create a consumer with consumer_poll_set set to false" do - config = rdkafka_consumer_config - config.consumer_poll_set = false - consumer = config.consumer + consumer = rdkafka_config.consumer expect(consumer).to be_a Rdkafka::Consumer consumer.close end @@ -197,7 +136,7 @@ def call(config, client_name); end end it "should create a producer with valid config" do - producer = rdkafka_consumer_config.producer + producer = rdkafka_config.producer expect(producer).to be_a Rdkafka::Producer producer.close end @@ 
-209,24 +148,11 @@ def call(config, client_name); end }.to raise_error(Rdkafka::Config::ConfigError, "No such configuration property: \"invalid.key\"") end - it "allows string partitioner key" do - expect(Rdkafka::Producer).to receive(:new).with(kind_of(Rdkafka::NativeKafka), "murmur2").and_call_original - config = Rdkafka::Config.new("partitioner" => "murmur2") - config.producer.close - end - - it "allows symbol partitioner key" do - expect(Rdkafka::Producer).to receive(:new).with(kind_of(Rdkafka::NativeKafka), "murmur2").and_call_original - config = Rdkafka::Config.new(:partitioner => "murmur2") - config.producer.close - end - it "should allow configuring zstd compression" do config = Rdkafka::Config.new('compression.codec' => 'zstd') begin - producer = config.producer - expect(producer).to be_a Rdkafka::Producer - producer.close + expect(config.producer).to be_a Rdkafka::Producer + config.producer.close rescue Rdkafka::Config::ConfigError => ex pending "Zstd compression not supported on this machine" raise ex diff --git a/spec/rdkafka/consumer/headers_spec.rb b/spec/rdkafka/consumer/headers_spec.rb deleted file mode 100644 index f467ec6d..00000000 --- a/spec/rdkafka/consumer/headers_spec.rb +++ /dev/null @@ -1,57 +0,0 @@ -# frozen_string_literal: true - -describe Rdkafka::Consumer::Headers do - let(:headers) do - { # Note String keys! - "version" => "2.1.3", - "type" => "String" - } - end - let(:native_message) { double('native message') } - let(:headers_ptr) { double('headers pointer') } - - describe '.from_native' do - before do - expect(Rdkafka::Bindings).to receive(:rd_kafka_message_headers).with(native_message, anything) do |_, headers_ptrptr| - expect(headers_ptrptr).to receive(:read_pointer).and_return(headers_ptr) - Rdkafka::Bindings::RD_KAFKA_RESP_ERR_NO_ERROR - end - - expect(Rdkafka::Bindings).to \ - receive(:rd_kafka_header_get_all) - .with(headers_ptr, 0, anything, anything, anything) do |_, _, name_ptrptr, value_ptrptr, size_ptr| - expect(name_ptrptr).to receive(:read_pointer).and_return(double("pointer 0", read_string_to_null: headers.keys[0])) - expect(size_ptr).to receive(:[]).with(:value).and_return(headers.keys[0].size) - expect(value_ptrptr).to receive(:read_pointer).and_return(double("value pointer 0", read_string: headers.values[0])) - Rdkafka::Bindings::RD_KAFKA_RESP_ERR_NO_ERROR - end - - expect(Rdkafka::Bindings).to \ - receive(:rd_kafka_header_get_all) - .with(headers_ptr, 1, anything, anything, anything) do |_, _, name_ptrptr, value_ptrptr, size_ptr| - expect(name_ptrptr).to receive(:read_pointer).and_return(double("pointer 1", read_string_to_null: headers.keys[1])) - expect(size_ptr).to receive(:[]).with(:value).and_return(headers.keys[1].size) - expect(value_ptrptr).to receive(:read_pointer).and_return(double("value pointer 1", read_string: headers.values[1])) - Rdkafka::Bindings::RD_KAFKA_RESP_ERR_NO_ERROR - end - - expect(Rdkafka::Bindings).to \ - receive(:rd_kafka_header_get_all) - .with(headers_ptr, 2, anything, anything, anything) - .and_return(Rdkafka::Bindings::RD_KAFKA_RESP_ERR__NOENT) - end - - subject { described_class.from_native(native_message) } - - it { is_expected.to eq(headers) } - it { is_expected.to be_frozen } - - it 'allows String key' do - expect(subject['version']).to eq("2.1.3") - end - - it 'does not support symbols mappings' do - expect(subject.key?(:version)).to eq(false) - end - end -end diff --git a/spec/rdkafka/consumer/message_spec.rb b/spec/rdkafka/consumer/message_spec.rb index 526cba5c..86b22933 100644 --- 
a/spec/rdkafka/consumer/message_spec.rb +++ b/spec/rdkafka/consumer/message_spec.rb @@ -1,4 +1,4 @@ -# frozen_string_literal: true +require "spec_helper" describe Rdkafka::Consumer::Message do let(:native_client) { new_native_client } diff --git a/spec/rdkafka/consumer/partition_spec.rb b/spec/rdkafka/consumer/partition_spec.rb index cc890b05..928deac7 100644 --- a/spec/rdkafka/consumer/partition_spec.rb +++ b/spec/rdkafka/consumer/partition_spec.rb @@ -1,4 +1,4 @@ -# frozen_string_literal: true +require "spec_helper" describe Rdkafka::Consumer::Partition do let(:offset) { 100 } diff --git a/spec/rdkafka/consumer/topic_partition_list_spec.rb b/spec/rdkafka/consumer/topic_partition_list_spec.rb index a3f4882f..e745cef7 100644 --- a/spec/rdkafka/consumer/topic_partition_list_spec.rb +++ b/spec/rdkafka/consumer/topic_partition_list_spec.rb @@ -1,4 +1,4 @@ -# frozen_string_literal: true +require "spec_helper" describe Rdkafka::Consumer::TopicPartitionList do it "should create a new list and add unassigned topics" do @@ -219,24 +219,5 @@ expect(list).to eq other end - - it "should create a native list with timetamp offsets if offsets are Time" do - list = Rdkafka::Consumer::TopicPartitionList.new.tap do |list| - list.add_topic_and_partitions_with_offsets("topic", 0 => Time.at(1505069646, 250_000)) - end - - tpl = list.to_native_tpl - - compare_list = Rdkafka::Consumer::TopicPartitionList.new.tap do |list| - list.add_topic_and_partitions_with_offsets( - "topic", - 0 => (Time.at(1505069646, 250_000).to_f * 1000).floor - ) - end - - native_list = Rdkafka::Consumer::TopicPartitionList.from_native_tpl(tpl) - - expect(native_list).to eq compare_list - end end end diff --git a/spec/rdkafka/consumer_spec.rb b/spec/rdkafka/consumer_spec.rb index e5da460d..86023bc4 100644 --- a/spec/rdkafka/consumer_spec.rb +++ b/spec/rdkafka/consumer_spec.rb @@ -1,32 +1,15 @@ -# frozen_string_literal: true - +require "spec_helper" require "ostruct" require 'securerandom' describe Rdkafka::Consumer do - let(:consumer) { rdkafka_consumer_config.consumer } - let(:producer) { rdkafka_producer_config.producer } + let(:config) { rdkafka_config } + let(:consumer) { config.consumer } + let(:producer) { config.producer } after { consumer.close } after { producer.close } - describe '#name' do - it { expect(consumer.name).to include('rdkafka#consumer-') } - end - - describe 'consumer without auto-start' do - let(:consumer) { rdkafka_consumer_config.consumer(native_kafka_auto_start: false) } - - it 'expect to be able to start it later and close' do - consumer.start - consumer.close - end - - it 'expect to be able to close it without starting' do - consumer.close - end - end - describe "#subscribe, #unsubscribe and #subscription" do it "should subscribe, unsubscribe and return the subscription" do expect(consumer.subscription).to be_empty @@ -67,35 +50,11 @@ consumer.subscription }.to raise_error(Rdkafka::RdkafkaError) end - - context "when using consumer without the poll set" do - let(:consumer) do - config = rdkafka_consumer_config - config.consumer_poll_set = false - config.consumer - end - - it "should subscribe, unsubscribe and return the subscription" do - expect(consumer.subscription).to be_empty - - consumer.subscribe("consume_test_topic") - - expect(consumer.subscription).not_to be_empty - expected_subscription = Rdkafka::Consumer::TopicPartitionList.new.tap do |list| - list.add_topic("consume_test_topic") - end - expect(consumer.subscription).to eq expected_subscription - - consumer.unsubscribe - - 
expect(consumer.subscription).to be_empty - end - end end describe "#pause and #resume" do context "subscription" do - let(:timeout) { 2000 } + let(:timeout) { 1000 } before { consumer.subscribe("consume_test_topic") } after { consumer.unsubscribe } @@ -224,11 +183,6 @@ def send_one_message(val) # 7. ensure same message is read again message2 = consumer.poll(timeout) - - # This is needed because `enable.auto.offset.store` is true but when running in CI that - # is overloaded, offset store lags - sleep(2) - consumer.commit expect(message1.offset).to eq message2.offset expect(message1.payload).to eq message2.payload @@ -258,95 +212,6 @@ def send_one_message(val) end end - describe "#seek_by" do - let(:topic) { "consume_test_topic" } - let(:partition) { 0 } - let(:offset) { 0 } - - it "should raise an error when seeking fails" do - expect(Rdkafka::Bindings).to receive(:rd_kafka_seek).and_return(20) - expect { - consumer.seek_by(topic, partition, offset) - }.to raise_error Rdkafka::RdkafkaError - end - - context "subscription" do - let(:timeout) { 1000 } - - before do - consumer.subscribe(topic) - - # 1. partitions are assigned - wait_for_assignment(consumer) - expect(consumer.assignment).not_to be_empty - - # 2. eat unrelated messages - while(consumer.poll(timeout)) do; end - end - after { consumer.unsubscribe } - - def send_one_message(val) - producer.produce( - topic: topic, - payload: "payload #{val}", - key: "key 1", - partition: 0 - ).wait - end - - it "works when a partition is paused" do - # 3. get reference message - send_one_message(:a) - message1 = consumer.poll(timeout) - expect(message1&.payload).to eq "payload a" - - # 4. pause the subscription - tpl = Rdkafka::Consumer::TopicPartitionList.new - tpl.add_topic(topic, 1) - consumer.pause(tpl) - - # 5. seek by the previous message fields - consumer.seek_by(message1.topic, message1.partition, message1.offset) - - # 6. resume the subscription - tpl = Rdkafka::Consumer::TopicPartitionList.new - tpl.add_topic(topic, 1) - consumer.resume(tpl) - - # 7. ensure same message is read again - message2 = consumer.poll(timeout) - - # This is needed because `enable.auto.offset.store` is true but when running in CI that - # is overloaded, offset store lags - sleep(2) - - consumer.commit - expect(message1.offset).to eq message2.offset - expect(message1.payload).to eq message2.payload - end - - it "allows skipping messages" do - # 3. send messages - send_one_message(:a) - send_one_message(:b) - send_one_message(:c) - - # 4. get reference message - message = consumer.poll(timeout) - expect(message&.payload).to eq "payload a" - - # 5. seek over one message - consumer.seek_by(message.topic, message.partition, message.offset + 2) - - # 6. ensure that only one message is available - records = consumer.poll(timeout) - expect(records&.payload).to eq "payload c" - records = consumer.poll(timeout) - expect(records).to be_nil - end - end - end - describe "#assign and #assignment" do it "should return an empty assignment if nothing is assigned" do expect(consumer.assignment).to be_empty @@ -377,7 +242,7 @@ def send_one_message(val) it "should return the assignment when subscribed" do # Make sure there's a message - producer.produce( + report = producer.produce( topic: "consume_test_topic", payload: "payload 1", key: "key 1", @@ -404,33 +269,11 @@ def send_one_message(val) end end - describe '#assignment_lost?' 
do - it "should not return true as we do have an assignment" do - consumer.subscribe("consume_test_topic") - expected_subscription = Rdkafka::Consumer::TopicPartitionList.new.tap do |list| - list.add_topic("consume_test_topic") - end - - expect(consumer.assignment_lost?).to eq false - consumer.unsubscribe - end - - it "should not return true after voluntary unsubscribing" do - consumer.subscribe("consume_test_topic") - expected_subscription = Rdkafka::Consumer::TopicPartitionList.new.tap do |list| - list.add_topic("consume_test_topic") - end - - consumer.unsubscribe - expect(consumer.assignment_lost?).to eq false - end - end - describe "#close" do it "should close a consumer" do consumer.subscribe("consume_test_topic") 100.times do |i| - producer.produce( + report = producer.produce( topic: "consume_test_topic", payload: "payload #{i}", key: "key #{i}", @@ -442,36 +285,12 @@ def send_one_message(val) consumer.poll(100) }.to raise_error(Rdkafka::ClosedConsumerError, /poll/) end - - context 'when there are outgoing operations in other threads' do - it 'should wait and not crash' do - times = [] - - # Run a long running poll - thread = Thread.new do - times << Time.now - consumer.subscribe("empty_test_topic") - times << Time.now - consumer.poll(1_000) - times << Time.now - end - - # Make sure it starts before we close - sleep(0.1) - consumer.close - close_time = Time.now - thread.join - - times.each { |op_time| expect(op_time).to be < close_time } - end - end end - - describe "#position, #commit, #committed and #store_offset" do - # Make sure there are messages to work with + describe "#commit, #committed and #store_offset" do + # Make sure there's a stored offset let!(:report) do - producer.produce( + report = producer.produce( topic: "consume_test_topic", payload: "payload 1", key: "key 1", @@ -487,26 +306,22 @@ def send_one_message(val) ) end - describe "#position" do - it "should only accept a topic partition list in position if not nil" do - expect { - consumer.position("list") - }.to raise_error TypeError - end + it "should only accept a topic partition list in committed" do + expect { + consumer.committed("list") + }.to raise_error TypeError end - describe "#committed" do - it "should only accept a topic partition list in commit if not nil" do - expect { - consumer.commit("list") - }.to raise_error TypeError - end + it "should commit in sync mode" do + expect { + consumer.commit(nil, true) + }.not_to raise_error + end - it "should commit in sync mode" do - expect { - consumer.commit(nil, true) - }.not_to raise_error - end + it "should only accept a topic partition list in commit if not nil" do + expect { + consumer.commit("list") + }.to raise_error TypeError end context "with a committed consumer" do @@ -557,43 +372,39 @@ def send_one_message(val) }.to raise_error(Rdkafka::RdkafkaError) end - describe "#committed" do - it "should fetch the committed offsets for the current assignment" do - partitions = consumer.committed.to_h["consume_test_topic"] - expect(partitions).not_to be_nil - expect(partitions[0].offset).to eq 1 - end + it "should fetch the committed offsets for the current assignment" do + partitions = consumer.committed.to_h["consume_test_topic"] + expect(partitions).not_to be_nil + expect(partitions[0].offset).to eq 1 + end - it "should fetch the committed offsets for a specified topic partition list" do - list = Rdkafka::Consumer::TopicPartitionList.new.tap do |list| - list.add_topic("consume_test_topic", [0, 1, 2]) - end - partitions = 
consumer.committed(list).to_h["consume_test_topic"] - expect(partitions).not_to be_nil - expect(partitions[0].offset).to eq 1 - expect(partitions[1].offset).to eq 1 - expect(partitions[2].offset).to eq 1 + it "should fetch the committed offsets for a specified topic partition list" do + list = Rdkafka::Consumer::TopicPartitionList.new.tap do |list| + list.add_topic("consume_test_topic", [0, 1, 2]) end + partitions = consumer.committed(list).to_h["consume_test_topic"] + expect(partitions).not_to be_nil + expect(partitions[0].offset).to eq 1 + expect(partitions[1].offset).to eq 1 + expect(partitions[2].offset).to eq 1 + end - it "should raise an error when getting committed fails" do - expect(Rdkafka::Bindings).to receive(:rd_kafka_committed).and_return(20) - list = Rdkafka::Consumer::TopicPartitionList.new.tap do |list| - list.add_topic("consume_test_topic", [0, 1, 2]) - end - expect { - consumer.committed(list) - }.to raise_error Rdkafka::RdkafkaError + it "should raise an error when getting committed fails" do + expect(Rdkafka::Bindings).to receive(:rd_kafka_committed).and_return(20) + list = Rdkafka::Consumer::TopicPartitionList.new.tap do |list| + list.add_topic("consume_test_topic", [0, 1, 2]) end + expect { + consumer.committed(list) + }.to raise_error Rdkafka::RdkafkaError end describe "#store_offset" do - let(:consumer) { rdkafka_consumer_config('enable.auto.offset.store': false).consumer } - before do config = {} config[:'enable.auto.offset.store'] = false config[:'enable.auto.commit'] = false - @new_consumer = rdkafka_consumer_config(config).consumer + @new_consumer = rdkafka_config(config).consumer @new_consumer.subscribe("consume_test_topic") wait_for_assignment(@new_consumer) end @@ -606,8 +417,6 @@ def send_one_message(val) @new_consumer.store_offset(message) @new_consumer.commit - # TODO use position here, should be at offset - list = Rdkafka::Consumer::TopicPartitionList.new.tap do |list| list.add_topic("consume_test_topic", [0, 1, 2]) end @@ -622,43 +431,6 @@ def send_one_message(val) @new_consumer.store_offset(message) }.to raise_error Rdkafka::RdkafkaError end - - describe "#position" do - it "should fetch the positions for the current assignment" do - consumer.store_offset(message) - - partitions = consumer.position.to_h["consume_test_topic"] - expect(partitions).not_to be_nil - expect(partitions[0].offset).to eq message.offset + 1 - end - - it "should fetch the positions for a specified assignment" do - consumer.store_offset(message) - - list = Rdkafka::Consumer::TopicPartitionList.new.tap do |list| - list.add_topic_and_partitions_with_offsets("consume_test_topic", 0 => nil, 1 => nil, 2 => nil) - end - partitions = consumer.position(list).to_h["consume_test_topic"] - expect(partitions).not_to be_nil - expect(partitions[0].offset).to eq message.offset + 1 - end - - it "should raise an error when getting the position fails" do - expect(Rdkafka::Bindings).to receive(:rd_kafka_position).and_return(20) - - expect { - consumer.position - }.to raise_error(Rdkafka::RdkafkaError) - end - end - - context "when trying to use with enable.auto.offset.store set to true" do - let(:consumer) { rdkafka_consumer_config('enable.auto.offset.store': true).consumer } - - it "expect to raise invalid configuration error" do - expect { consumer.store_offset(message) }.to raise_error(Rdkafka::RdkafkaError, /invalid_arg/) - end - end end end end @@ -687,13 +459,13 @@ def send_one_message(val) end describe "#lag" do - let(:consumer) { rdkafka_consumer_config(:"enable.partition.eof" => 
true).consumer } + let(:config) { rdkafka_config(:"enable.partition.eof" => true) } it "should calculate the consumer lag" do # Make sure there's a message in every partition and # wait for the message to make sure everything is committed. (0..2).each do |i| - producer.produce( + report = producer.produce( topic: "consume_test_topic", key: "key lag #{i}", partition: i @@ -736,7 +508,7 @@ def send_one_message(val) # Produce message on every topic again (0..2).each do |i| - producer.produce( + report = producer.produce( topic: "consume_test_topic", key: "key lag #{i}", partition: i @@ -822,7 +594,7 @@ def send_one_message(val) end describe "#poll with headers" do - it "should return message with headers using string keys (when produced with symbol keys)" do + it "should return message with headers" do report = producer.produce( topic: "consume_test_topic", key: "key headers", @@ -832,20 +604,7 @@ def send_one_message(val) message = wait_for_message(topic: "consume_test_topic", consumer: consumer, delivery_report: report) expect(message).to be expect(message.key).to eq('key headers') - expect(message.headers).to include('foo' => 'bar') - end - - it "should return message with headers using string keys (when produced with string keys)" do - report = producer.produce( - topic: "consume_test_topic", - key: "key headers", - headers: { 'foo' => 'bar' } - ).wait - - message = wait_for_message(topic: "consume_test_topic", consumer: consumer, delivery_report: report) - expect(message).to be - expect(message.key).to eq('key headers') - expect(message.headers).to include('foo' => 'bar') + expect(message.headers).to include(foo: 'bar') end it "should return message with no headers" do @@ -940,7 +699,7 @@ def produce_n(n) n.times do |i| handles << producer.produce( topic: topic_name, - payload: i % 10 == 0 ? nil : Time.new.to_f.to_s, + payload: Time.new.to_f.to_s, key: i.to_s, partition: 0 ) @@ -965,8 +724,7 @@ def new_message # # This is, in effect, an integration test and the subsequent specs are # unit tests. 
- admin = rdkafka_config.admin - create_topic_handle = admin.create_topic(topic_name, 1, 1) + create_topic_handle = rdkafka_config.admin.create_topic(topic_name, 1, 1) create_topic_handle.wait(max_wait_timeout: 15.0) consumer.subscribe(topic_name) produce_n 42 @@ -979,7 +737,6 @@ def new_message expect(all_yields.flatten.size).to eq 42 expect(all_yields.size).to be > 4 expect(all_yields.flatten.map(&:key)).to eq (0..41).map { |x| x.to_s } - admin.close end it "should batch poll results and yield arrays of messages" do @@ -1022,15 +779,13 @@ def new_message end it "should yield [] if nothing is received before the timeout" do - admin = rdkafka_config.admin - create_topic_handle = admin.create_topic(topic_name, 1, 1) + create_topic_handle = rdkafka_config.admin.create_topic(topic_name, 1, 1) create_topic_handle.wait(max_wait_timeout: 15.0) consumer.subscribe(topic_name) consumer.each_batch do |batch| expect(batch).to eq([]) break end - admin.close end it "should yield batchs of max_items in size if messages are already fetched" do @@ -1069,14 +824,11 @@ def new_message context "error raised from poll and yield_on_error is true" do it "should yield buffered exceptions on rebalance, then break" do - config = rdkafka_consumer_config( - { - :"enable.auto.commit" => false, - :"enable.auto.offset.store" => false - } - ) + config = rdkafka_config({:"enable.auto.commit" => false, + :"enable.auto.offset.store" => false }) consumer = config.consumer consumer.subscribe(topic_name) + loop_count = 0 batches_yielded = [] exceptions_yielded = [] each_batch_iterations = 0 @@ -1107,20 +859,16 @@ def new_message expect(batches_yielded.first.size).to eq 2 expect(exceptions_yielded.flatten.size).to eq 1 expect(exceptions_yielded.flatten.first).to be_instance_of(Rdkafka::RdkafkaError) - consumer.close end end context "error raised from poll and yield_on_error is false" do it "should yield buffered exceptions on rebalance, then break" do - config = rdkafka_consumer_config( - { - :"enable.auto.commit" => false, - :"enable.auto.offset.store" => false - } - ) + config = rdkafka_config({:"enable.auto.commit" => false, + :"enable.auto.offset.store" => false }) consumer = config.consumer consumer.subscribe(topic_name) + loop_count = 0 batches_yielded = [] exceptions_yielded = [] each_batch_iterations = 0 @@ -1149,162 +897,64 @@ def new_message expect(each_batch_iterations).to eq 0 expect(batches_yielded.size).to eq 0 expect(exceptions_yielded.size).to eq 0 - consumer.close end end end - describe "#offsets_for_times" do - it "should raise when not TopicPartitionList" do - expect { consumer.offsets_for_times([]) }.to raise_error(TypeError) - end - - it "should raise an error when offsets_for_times fails" do - tpl = Rdkafka::Consumer::TopicPartitionList.new - - expect(Rdkafka::Bindings).to receive(:rd_kafka_offsets_for_times).and_return(7) - - expect { consumer.offsets_for_times(tpl) }.to raise_error(Rdkafka::RdkafkaError) - end - - context "when subscribed" do - let(:timeout) { 1000 } - - before do - consumer.subscribe("consume_test_topic") - - # 1. partitions are assigned - wait_for_assignment(consumer) - expect(consumer.assignment).not_to be_empty - - # 2. 
eat unrelated messages - while(consumer.poll(timeout)) do; end - end - - after { consumer.unsubscribe } - - def send_one_message(val) - producer.produce( - topic: "consume_test_topic", - payload: "payload #{val}", - key: "key 0", - partition: 0 - ).wait - end - - it "returns a TopicParticionList with updated offsets" do - send_one_message("a") - send_one_message("b") - send_one_message("c") - - consumer.poll(timeout) - message = consumer.poll(timeout) - consumer.poll(timeout) - - tpl = Rdkafka::Consumer::TopicPartitionList.new.tap do |list| - list.add_topic_and_partitions_with_offsets( - "consume_test_topic", - [ - [0, message.timestamp] - ] - ) + describe "a rebalance listener" do + it "should get notifications" do + listener = Struct.new(:queue) do + def on_partitions_assigned(consumer, list) + collect(:assign, list) end - tpl_response = consumer.offsets_for_times(tpl) - - expect(tpl_response.to_h["consume_test_topic"][0].offset).to eq message.offset - end - end - end - - # Only relevant in case of a consumer with separate queues - describe '#events_poll' do - let(:stats) { [] } + def on_partitions_revoked(consumer, list) + collect(:revoke, list) + end - before { Rdkafka::Config.statistics_callback = ->(published) { stats << published } } + def collect(name, list) + partitions = list.to_h.map { |key, values| [key, values.map(&:partition)] }.flatten + queue << ([name] + partitions) + end + end.new([]) - after { Rdkafka::Config.statistics_callback = nil } + notify_listener(listener) - let(:consumer) do - config = rdkafka_consumer_config('statistics.interval.ms': 100) - config.consumer_poll_set = false - config.consumer + expect(listener.queue).to eq([ + [:assign, "consume_test_topic", 0, 1, 2], + [:revoke, "consume_test_topic", 0, 1, 2] + ]) end - it "expect to run events_poll, operate and propagate stats on events_poll and not poll" do - consumer.subscribe("consume_test_topic") - consumer.poll(1_000) - expect(stats).to be_empty - consumer.events_poll(-1) - expect(stats).not_to be_empty - end - end + it 'should handle callback exceptions' do + listener = Struct.new(:queue) do + def on_partitions_assigned(consumer, list) + queue << :assigned + raise 'boom' + end - describe '#consumer_group_metadata_pointer' do - let(:pointer) { consumer.consumer_group_metadata_pointer } + def on_partitions_revoked(consumer, list) + queue << :revoked + raise 'boom' + end + end.new([]) - after { Rdkafka::Bindings.rd_kafka_consumer_group_metadata_destroy(pointer) } + notify_listener(listener) - it 'expect to return a pointer' do - expect(pointer).to be_a(FFI::Pointer) + expect(listener.queue).to eq([:assigned, :revoked]) end - end - describe "a rebalance listener" do - let(:consumer) do - config = rdkafka_consumer_config + def notify_listener(listener) + # 1. 
subscribe and poll config.consumer_rebalance_listener = listener - config.consumer - end - - context "with a working listener" do - let(:listener) do - Struct.new(:queue) do - def on_partitions_assigned(list) - collect(:assign, list) - end - - def on_partitions_revoked(list) - collect(:revoke, list) - end - - def collect(name, list) - partitions = list.to_h.map { |key, values| [key, values.map(&:partition)] }.flatten - queue << ([name] + partitions) - end - end.new([]) - end - - it "should get notifications" do - notify_listener(listener) - - expect(listener.queue).to eq([ - [:assign, "consume_test_topic", 0, 1, 2], - [:revoke, "consume_test_topic", 0, 1, 2] - ]) - end - end - - context "with a broken listener" do - let(:listener) do - Struct.new(:queue) do - def on_partitions_assigned(list) - queue << :assigned - raise 'boom' - end - - def on_partitions_revoked(list) - queue << :revoked - raise 'boom' - end - end.new([]) - end - - it 'should handle callback exceptions' do - notify_listener(listener) + consumer.subscribe("consume_test_topic") + wait_for_assignment(consumer) + consumer.poll(100) - expect(listener.queue).to eq([:assigned, :revoked]) - end + # 2. unsubscribe + consumer.unsubscribe + wait_for_unassignment(consumer) + consumer.close end end @@ -1324,7 +974,7 @@ def on_partitions_revoked(list) :assign => [ nil ], :assignment => nil, :committed => [], - :query_watermark_offsets => [ nil, nil ] + :query_watermark_offsets => [ nil, nil ], }.each do |method, args| it "raises an exception if #{method} is called" do expect { @@ -1337,106 +987,4 @@ def on_partitions_revoked(list) end end end - - it "provides a finalizer that closes the native kafka client" do - expect(consumer.closed?).to eq(false) - - consumer.finalizer.call("some-ignored-object-id") - - expect(consumer.closed?).to eq(true) - end - - context "when the rebalance protocol is cooperative" do - let(:consumer) do - config = rdkafka_consumer_config( - { - :"partition.assignment.strategy" => "cooperative-sticky", - :"debug" => "consumer", - } - ) - config.consumer_rebalance_listener = listener - config.consumer - end - - let(:listener) do - Struct.new(:queue) do - def on_partitions_assigned(list) - collect(:assign, list) - end - - def on_partitions_revoked(list) - collect(:revoke, list) - end - - def collect(name, list) - partitions = list.to_h.map { |key, values| [key, values.map(&:partition)] }.flatten - queue << ([name] + partitions) - end - end.new([]) - end - - it "should be able to assign and unassign partitions using the cooperative partition assignment APIs" do - notify_listener(listener) do - handles = [] - 10.times do - handles << producer.produce( - topic: "consume_test_topic", - payload: "payload 1", - key: "key 1", - partition: 0 - ) - end - handles.each(&:wait) - - consumer.subscribe("consume_test_topic") - # Check the first 10 messages. Then close the consumer, which - # should break the each loop. 
- consumer.each_with_index do |message, i| - expect(message).to be_a Rdkafka::Consumer::Message - break if i == 10 - end - end - - expect(listener.queue).to eq([ - [:assign, "consume_test_topic", 0, 1, 2], - [:revoke, "consume_test_topic", 0, 1, 2] - ]) - end - end - - describe '#oauthbearer_set_token' do - context 'when sasl not configured' do - it 'should return RD_KAFKA_RESP_ERR__STATE' do - response = consumer.oauthbearer_set_token( - token: "foo", - lifetime_ms: Time.now.to_i*1000 + 900 * 1000, - principal_name: "kafka-cluster" - ) - expect(response).to eq(Rdkafka::Bindings::RD_KAFKA_RESP_ERR__STATE) - end - end - - context 'when sasl configured' do - before do - $consumer_sasl = rdkafka_producer_config( - "security.protocol": "sasl_ssl", - "sasl.mechanisms": 'OAUTHBEARER' - ).consumer - end - - after do - $consumer_sasl.close - end - - it 'should succeed' do - - response = $consumer_sasl.oauthbearer_set_token( - token: "foo", - lifetime_ms: Time.now.to_i*1000 + 900 * 1000, - principal_name: "kafka-cluster" - ) - expect(response).to eq(0) - end - end - end end diff --git a/spec/rdkafka/error_spec.rb b/spec/rdkafka/error_spec.rb index 3f1567e3..8f80a8d5 100644 --- a/spec/rdkafka/error_spec.rb +++ b/spec/rdkafka/error_spec.rb @@ -1,4 +1,4 @@ -# frozen_string_literal: true +require "spec_helper" describe Rdkafka::RdkafkaError do it "should raise a type error for a nil response" do diff --git a/spec/rdkafka/metadata_spec.rb b/spec/rdkafka/metadata_spec.rb index 1462d1ec..968905ba 100644 --- a/spec/rdkafka/metadata_spec.rb +++ b/spec/rdkafka/metadata_spec.rb @@ -1,9 +1,8 @@ -# frozen_string_literal: true - +require "spec_helper" require "securerandom" describe Rdkafka::Metadata do - let(:config) { rdkafka_consumer_config } + let(:config) { rdkafka_config } let(:native_config) { config.send(:native_config) } let(:native_kafka) { config.send(:native_kafka, native_config, :rd_kafka_consumer) } @@ -30,7 +29,7 @@ it "#brokers returns our single broker" do expect(subject.brokers.length).to eq(1) expect(subject.brokers[0][:broker_id]).to eq(1) - expect(subject.brokers[0][:broker_name]).to eq("127.0.0.1") + expect(subject.brokers[0][:broker_name]).to eq("localhost") expect(subject.brokers[0][:broker_port]).to eq(9092) end @@ -53,7 +52,7 @@ it "#brokers returns our single broker" do expect(subject.brokers.length).to eq(1) expect(subject.brokers[0][:broker_id]).to eq(1) - expect(subject.brokers[0][:broker_name]).to eq("127.0.0.1") + expect(subject.brokers[0][:broker_name]).to eq("localhost") expect(subject.brokers[0][:broker_port]).to eq(9092) end diff --git a/spec/rdkafka/native_kafka_spec.rb b/spec/rdkafka/native_kafka_spec.rb deleted file mode 100644 index 089aa810..00000000 --- a/spec/rdkafka/native_kafka_spec.rb +++ /dev/null @@ -1,130 +0,0 @@ -# frozen_string_literal: true - -describe Rdkafka::NativeKafka do - let(:config) { rdkafka_producer_config } - let(:native) { config.send(:native_kafka, config.send(:native_config), :rd_kafka_producer) } - let(:closing) { false } - let(:thread) { double(Thread) } - let(:opaque) { Rdkafka::Opaque.new } - - subject(:client) { described_class.new(native, run_polling_thread: true, opaque: opaque) } - - before do - allow(Rdkafka::Bindings).to receive(:rd_kafka_name).and_return('producer-1') - allow(Thread).to receive(:new).and_return(thread) - allow(thread).to receive(:name=).with("rdkafka.native_kafka#producer-1") - allow(thread).to receive(:[]=).with(:closing, anything) - allow(thread).to receive(:join) - allow(thread).to 
receive(:abort_on_exception=).with(anything) - end - - after { client.close } - - context "defaults" do - it "sets the thread name" do - expect(thread).to receive(:name=).with("rdkafka.native_kafka#producer-1") - - client - end - - it "sets the thread to abort on exception" do - expect(thread).to receive(:abort_on_exception=).with(true) - - client - end - - it "sets the thread `closing` flag to false" do - expect(thread).to receive(:[]=).with(:closing, false) - - client - end - end - - context "the polling thread" do - it "is created" do - expect(Thread).to receive(:new) - - client - end - end - - it "exposes the inner client" do - client.with_inner do |inner| - expect(inner).to eq(native) - end - end - - context "when client was not yet closed (`nil`)" do - it "is not closed" do - expect(client.closed?).to eq(false) - end - - context "and attempt to close" do - it "calls the `destroy` binding" do - expect(Rdkafka::Bindings).to receive(:rd_kafka_destroy).with(native).and_call_original - - client.close - end - - it "indicates to the polling thread that it is closing" do - expect(thread).to receive(:[]=).with(:closing, true) - - client.close - end - - it "joins the polling thread" do - expect(thread).to receive(:join) - - client.close - end - - it "closes and unassign the native client" do - client.close - - expect(client.closed?).to eq(true) - end - end - end - - context "when client was already closed" do - before { client.close } - - it "is closed" do - expect(client.closed?).to eq(true) - end - - context "and attempt to close again" do - it "does not call the `destroy` binding" do - expect(Rdkafka::Bindings).not_to receive(:rd_kafka_destroy_flags) - - client.close - end - - it "does not indicate to the polling thread that it is closing" do - expect(thread).not_to receive(:[]=).with(:closing, true) - - client.close - end - - it "does not join the polling thread" do - expect(thread).not_to receive(:join) - - client.close - end - - it "does not close and unassign the native client again" do - client.close - - expect(client.closed?).to eq(true) - end - end - end - - it "provides a finalizer that closes the native kafka client" do - expect(client.closed?).to eq(false) - - client.finalizer.call("some-ignored-object-id") - - expect(client.closed?).to eq(true) - end -end diff --git a/spec/rdkafka/producer/delivery_handle_spec.rb b/spec/rdkafka/producer/delivery_handle_spec.rb index b9095a24..20b9f3b5 100644 --- a/spec/rdkafka/producer/delivery_handle_spec.rb +++ b/spec/rdkafka/producer/delivery_handle_spec.rb @@ -1,4 +1,4 @@ -# frozen_string_literal: true +require "spec_helper" describe Rdkafka::Producer::DeliveryHandle do let(:response) { 0 } @@ -9,7 +9,6 @@ handle[:response] = response handle[:partition] = 2 handle[:offset] = 100 - handle.topic = "produce_test_topic" end end @@ -30,7 +29,6 @@ expect(report.partition).to eq(2) expect(report.offset).to eq(100) - expect(report.topic_name).to eq("produce_test_topic") end it "should wait without a timeout" do @@ -38,7 +36,6 @@ expect(report.partition).to eq(2) expect(report.offset).to eq(100) - expect(report.topic_name).to eq("produce_test_topic") end end end diff --git a/spec/rdkafka/producer/delivery_report_spec.rb b/spec/rdkafka/producer/delivery_report_spec.rb index 6808f740..3db11253 100644 --- a/spec/rdkafka/producer/delivery_report_spec.rb +++ b/spec/rdkafka/producer/delivery_report_spec.rb @@ -1,7 +1,7 @@ -# frozen_string_literal: true +require "spec_helper" describe Rdkafka::Producer::DeliveryReport do - subject { 
Rdkafka::Producer::DeliveryReport.new(2, 100, "topic", -1) } + subject { Rdkafka::Producer::DeliveryReport.new(2, 100, "error") } it "should get the partition" do expect(subject.partition).to eq 2 @@ -11,15 +11,7 @@ expect(subject.offset).to eq 100 end - it "should get the topic_name" do - expect(subject.topic_name).to eq "topic" - end - - it "should get the same topic name under topic alias" do - expect(subject.topic).to eq "topic" - end - it "should get the error" do - expect(subject.error).to eq -1 + expect(subject.error).to eq "error" end end diff --git a/spec/rdkafka/producer_spec.rb b/spec/rdkafka/producer_spec.rb index d86f6895..8e8849d8 100644 --- a/spec/rdkafka/producer_spec.rb +++ b/spec/rdkafka/producer_spec.rb @@ -1,78 +1,16 @@ -# frozen_string_literal: true - -require "zlib" +require "spec_helper" describe Rdkafka::Producer do - let(:producer) { rdkafka_producer_config.producer } - let(:consumer) { rdkafka_consumer_config.consumer } + let(:producer) { rdkafka_config.producer } + let(:consumer) { rdkafka_config.consumer } after do # Registry should always end up being empty - registry = Rdkafka::Producer::DeliveryHandle::REGISTRY - expect(registry).to be_empty, registry.inspect + expect(Rdkafka::Producer::DeliveryHandle::REGISTRY).to be_empty producer.close consumer.close end - describe 'producer without auto-start' do - let(:producer) { rdkafka_producer_config.producer(native_kafka_auto_start: false) } - - it 'expect to be able to start it later and close' do - producer.start - producer.close - end - - it 'expect to be able to close it without starting' do - producer.close - end - end - - describe '#name' do - it { expect(producer.name).to include('rdkafka#producer-') } - end - - describe '#produce with topic config alterations' do - context 'when config is not valid' do - it 'expect to raise error' do - expect do - producer.produce(topic: 'test', payload: '', topic_config: { 'invalid': 'invalid' }) - end.to raise_error(Rdkafka::Config::ConfigError) - end - end - - context 'when config is valid' do - it 'expect to raise error' do - expect do - producer.produce(topic: 'test', payload: '', topic_config: { 'acks': 1 }).wait - end.not_to raise_error - end - - context 'when alteration should change behavior' do - # This is set incorrectly for a reason - # If alteration would not work, this will hang the spec suite - let(:producer) do - rdkafka_producer_config( - 'message.timeout.ms': 1_000_000, - :"bootstrap.servers" => "localhost:9094", - ).producer - end - - it 'expect to give up on delivery fast based on alteration config' do - expect do - producer.produce( - topic: 'produce_config_test', - payload: 'test', - topic_config: { - 'compression.type': 'gzip', - 'message.timeout.ms': 1 - } - ).wait - end.to raise_error(Rdkafka::RdkafkaError, /msg_timed_out/) - end - end - end - end - context "delivery callback" do context "with a proc/lambda" do it "should set the callback" do @@ -89,10 +27,8 @@ producer.delivery_callback = lambda do |report| expect(report).not_to be_nil - expect(report.label).to eq "label" expect(report.partition).to eq 1 expect(report.offset).to be >= 0 - expect(report.topic_name).to eq "produce_test_topic" @callback_called = true end @@ -100,12 +36,9 @@ handle = producer.produce( topic: "produce_test_topic", payload: "payload", - key: "key", - label: "label" + key: "key" ) - expect(handle.label).to eq "label" - # Wait for it to be delivered handle.wait(max_wait_timeout: 15) @@ -115,27 +48,6 @@ # Callback should have been called expect(@callback_called).to be true 
end - - it "should provide handle" do - @callback_handle = nil - - producer.delivery_callback = lambda { |_, handle| @callback_handle = handle } - - # Produce a message - handle = producer.produce( - topic: "produce_test_topic", - payload: "payload", - key: "key" - ) - - # Wait for it to be delivered - handle.wait(max_wait_timeout: 15) - - # Join the producer thread. - producer.close - - expect(handle).to be @callback_handle - end end context "with a callable object" do @@ -179,37 +91,6 @@ def call(report) expect(called_report.first).not_to be_nil expect(called_report.first.partition).to eq 1 expect(called_report.first.offset).to be >= 0 - expect(called_report.first.topic_name).to eq "produce_test_topic" - end - - it "should provide handle" do - callback_handles = [] - callback = Class.new do - def initialize(callback_handles) - @callback_handles = callback_handles - end - - def call(_, handle) - @callback_handles << handle - end - end - producer.delivery_callback = callback.new(callback_handles) - - # Produce a message - handle = producer.produce( - topic: "produce_test_topic", - payload: "payload", - key: "key" - ) - - # Wait for it to be delivered - handle.wait(max_wait_timeout: 15) - - # Join the producer thread. - producer.close - - # Callback should have been called - expect(handle).to be callback_handles.first end end @@ -234,13 +115,11 @@ def call(_, handle) handle = producer.produce( topic: "produce_test_topic", payload: "payload", - key: "key", - label: "label" + key: "key" ) # Should be pending at first expect(handle.pending?).to be true - expect(handle.label).to eq "label" # Check delivery handle and report report = handle.wait(max_wait_timeout: 5) @@ -248,13 +127,11 @@ def call(_, handle) expect(report).not_to be_nil expect(report.partition).to eq 1 expect(report.offset).to be >= 0 - expect(report.label).to eq "label" - # Flush and close producer - producer.flush + # Close producer producer.close - # Consume message and verify its content + # Consume message and verify it's content message = wait_for_message( topic: "produce_test_topic", delivery_report: report, @@ -278,7 +155,7 @@ def call(_, handle) ) report = handle.wait(max_wait_timeout: 5) - # Consume message and verify its content + # Consume message and verify it's content message = wait_for_message( topic: "produce_test_topic", delivery_report: report, @@ -322,28 +199,6 @@ def call(_, handle) expect(messages[2].key).to eq key end - it "should produce a message with empty string without crashing" do - messages = [{key: 'a', partition_key: ''}] - - messages = messages.map do |m| - handle = producer.produce( - topic: "partitioner_test_topic", - payload: "payload partition", - key: m[:key], - partition_key: m[:partition_key] - ) - report = handle.wait(max_wait_timeout: 5) - - wait_for_message( - topic: "partitioner_test_topic", - delivery_report: report, - ) - end - - expect(messages[0].partition).to eq 0 - expect(messages[0].key).to eq 'a' - end - it "should produce a message with utf-8 encoding" do handle = producer.produce( topic: "produce_test_topic", @@ -352,7 +207,7 @@ def call(_, handle) ) report = handle.wait(max_wait_timeout: 5) - # Consume message and verify its content + # Consume message and verify it's content message = wait_for_message( topic: "produce_test_topic", delivery_report: report, @@ -385,7 +240,7 @@ def call(_, handle) ) report = handle.wait(max_wait_timeout: 5) - # Consume message and verify its content + # Consume message and verify it's content message = wait_for_message( topic: 
"produce_test_topic", delivery_report: report, @@ -406,7 +261,7 @@ def call(_, handle) ) report = handle.wait(max_wait_timeout: 5) - # Consume message and verify its content + # Consume message and verify it's content message = wait_for_message( topic: "produce_test_topic", delivery_report: report, @@ -426,7 +281,7 @@ def call(_, handle) ) report = handle.wait(max_wait_timeout: 5) - # Consume message and verify its content + # Consume message and verify it's content message = wait_for_message( topic: "produce_test_topic", delivery_report: report, @@ -444,7 +299,7 @@ def call(_, handle) ) report = handle.wait(max_wait_timeout: 5) - # Consume message and verify its content + # Consume message and verify it's content message = wait_for_message( topic: "produce_test_topic", delivery_report: report, @@ -464,7 +319,7 @@ def call(_, handle) ) report = handle.wait(max_wait_timeout: 5) - # Consume message and verify its content + # Consume message and verify it's content message = wait_for_message( topic: "produce_test_topic", delivery_report: report, @@ -473,9 +328,9 @@ def call(_, handle) expect(message.payload).to eq "payload headers" expect(message.key).to eq "key headers" - expect(message.headers["foo"]).to eq "bar" - expect(message.headers["baz"]).to eq "foobar" - expect(message.headers["foobar"]).to be_nil + expect(message.headers[:foo]).to eq "bar" + expect(message.headers[:baz]).to eq "foobar" + expect(message.headers[:foobar]).to be_nil end it "should produce a message with empty headers" do @@ -487,7 +342,7 @@ def call(_, handle) ) report = handle.wait(max_wait_timeout: 5) - # Consume message and verify its content + # Consume message and verify it's content message = wait_for_message( topic: "produce_test_topic", delivery_report: report, @@ -520,16 +375,20 @@ def call(_, handle) end end - it "should produce a message in a forked process", skip: defined?(JRUBY_VERSION) && "Kernel#fork is not available" do + it "should produce a message in a forked process" do # Fork, produce a message, send the report over a pipe and # wait for and check the message in the main process. + + # Kernel#fork is not available in JRuby + skip if defined?(JRUBY_VERSION) + reader, writer = IO.pipe - pid = fork do + fork do reader.close - # Avoid sharing the client between processes. - producer = rdkafka_producer_config.producer + # Avoids sharing the socket between processes. 
+ producer = rdkafka_config.producer handle = producer.produce( topic: "produce_test_topic", @@ -541,28 +400,24 @@ def call(_, handle) report_json = JSON.generate( "partition" => report.partition, - "offset" => report.offset, - "topic_name" => report.topic_name + "offset" => report.offset ) writer.write(report_json) writer.close - producer.flush producer.close end - Process.wait(pid) writer.close report_hash = JSON.parse(reader.read) report = Rdkafka::Producer::DeliveryReport.new( report_hash["partition"], - report_hash["offset"], - report_hash["topic_name"] + report_hash["offset"] ) reader.close - # Consume message and verify its content + # Consume message and verify it's content message = wait_for_message( topic: "produce_test_topic", delivery_report: report, @@ -619,204 +474,4 @@ def call(_, handle) end end end - - context "when not being able to deliver the message" do - let(:producer) do - rdkafka_producer_config( - "bootstrap.servers": "localhost:9093", - "message.timeout.ms": 100 - ).producer - end - - it "should contain the error in the response when not deliverable" do - handler = producer.produce(topic: 'produce_test_topic', payload: nil, label: 'na') - # Wait for the async callbacks and delivery registry to update - sleep(2) - expect(handler.create_result.error).to be_a(Rdkafka::RdkafkaError) - expect(handler.create_result.label).to eq('na') - end - end - - describe '#partition_count' do - it { expect(producer.partition_count('consume_test_topic')).to eq(3) } - - context 'when the partition count value is already cached' do - before do - producer.partition_count('consume_test_topic') - allow(::Rdkafka::Metadata).to receive(:new).and_call_original - end - - it 'expect not to query it again' do - producer.partition_count('consume_test_topic') - expect(::Rdkafka::Metadata).not_to have_received(:new) - end - end - - context 'when the partition count value was cached but time expired' do - before do - allow(::Process).to receive(:clock_gettime).and_return(0, 30.02) - producer.partition_count('consume_test_topic') - allow(::Rdkafka::Metadata).to receive(:new).and_call_original - end - - it 'expect not to query it again' do - producer.partition_count('consume_test_topic') - expect(::Rdkafka::Metadata).to have_received(:new) - end - end - - context 'when the partition count value was cached and time did not expire' do - before do - allow(::Process).to receive(:clock_gettime).and_return(0, 29.001) - producer.partition_count('consume_test_topic') - allow(::Rdkafka::Metadata).to receive(:new).and_call_original - end - - it 'expect not to query it again' do - producer.partition_count('consume_test_topic') - expect(::Rdkafka::Metadata).not_to have_received(:new) - end - end - end - - describe '#flush' do - it "should return flush when it can flush all outstanding messages or when no messages" do - producer.produce( - topic: "produce_test_topic", - payload: "payload headers", - key: "key headers", - headers: {} - ) - - expect(producer.flush(5_000)).to eq(true) - end - - context 'when it cannot flush due to a timeout' do - let(:producer) do - rdkafka_producer_config( - "bootstrap.servers": "localhost:9093", - "message.timeout.ms": 2_000 - ).producer - end - - after do - # Allow rdkafka to evict message preventing memory-leak - sleep(2) - end - - it "should return false on flush when cannot deliver and beyond timeout" do - producer.produce( - topic: "produce_test_topic", - payload: "payload headers", - key: "key headers", - headers: {} - ) - - expect(producer.flush(1_000)).to eq(false) - end - 
end - - context 'when there is a different error' do - before { allow(Rdkafka::Bindings).to receive(:rd_kafka_flush).and_return(-199) } - - it 'should raise it' do - expect { producer.flush }.to raise_error(Rdkafka::RdkafkaError) - end - end - end - - describe '#purge' do - context 'when no outgoing messages' do - it { expect(producer.purge).to eq(true) } - end - - context 'when librdkafka purge returns an error' do - before { expect(Rdkafka::Bindings).to receive(:rd_kafka_purge).and_return(-153) } - - it 'expect to raise an error' do - expect { producer.purge }.to raise_error(Rdkafka::RdkafkaError, /retry/) - end - end - - context 'when there are outgoing things in the queue' do - let(:producer) do - rdkafka_producer_config( - "bootstrap.servers": "localhost:9093", - "message.timeout.ms": 2_000 - ).producer - end - - it "should should purge and move forward" do - producer.produce( - topic: "produce_test_topic", - payload: "payload headers" - ) - - expect(producer.purge).to eq(true) - expect(producer.flush(1_000)).to eq(true) - end - - it "should materialize the delivery handles" do - handle = producer.produce( - topic: "produce_test_topic", - payload: "payload headers" - ) - - expect(producer.purge).to eq(true) - - expect { handle.wait }.to raise_error(Rdkafka::RdkafkaError, /purge_queue/) - end - - context "when using delivery_callback" do - let(:delivery_reports) { [] } - - let(:delivery_callback) do - ->(delivery_report) { delivery_reports << delivery_report } - end - - before { producer.delivery_callback = delivery_callback } - - it "should run the callback" do - handle = producer.produce( - topic: "produce_test_topic", - payload: "payload headers" - ) - - expect(producer.purge).to eq(true) - # queue purge - expect(delivery_reports[0].error).to eq(-152) - end - end - end - end - - describe '#oauthbearer_set_token' do - context 'when sasl not configured' do - it 'should return RD_KAFKA_RESP_ERR__STATE' do - response = producer.oauthbearer_set_token( - token: "foo", - lifetime_ms: Time.now.to_i*1000 + 900 * 1000, - principal_name: "kafka-cluster" - ) - expect(response).to eq(Rdkafka::Bindings::RD_KAFKA_RESP_ERR__STATE) - end - end - - context 'when sasl configured' do - it 'should succeed' do - producer_sasl = rdkafka_producer_config( - { - "security.protocol": "sasl_ssl", - "sasl.mechanisms": 'OAUTHBEARER' - } - ).producer - response = producer_sasl.oauthbearer_set_token( - token: "foo", - lifetime_ms: Time.now.to_i*1000 + 900 * 1000, - principal_name: "kafka-cluster" - ) - expect(response).to eq(0) - end - end - end end diff --git a/spec/spec_helper.rb b/spec/spec_helper.rb index 0f2a02f3..4ad7d16e 100644 --- a/spec/spec_helper.rb +++ b/spec/spec_helper.rb @@ -1,5 +1,3 @@ -# frozen_string_literal: true - unless ENV["CI"] == "true" require "simplecov" SimpleCov.start do @@ -10,58 +8,27 @@ require "pry" require "rspec" require "rdkafka" -require "timeout" -require "securerandom" -def rdkafka_base_config - { +def rdkafka_config(config_overrides={}) + config = { :"api.version.request" => false, :"broker.version.fallback" => "1.0", :"bootstrap.servers" => "localhost:9092", + :"group.id" => "ruby-test-#{Random.new.rand(0..1_000_000)}", + :"auto.offset.reset" => "earliest", + :"enable.partition.eof" => false } -end - -def rdkafka_config(config_overrides={}) - # Generate the base config - config = rdkafka_base_config - # Merge overrides - config.merge!(config_overrides) - # Return it - Rdkafka::Config.new(config) -end - -def rdkafka_consumer_config(config_overrides={}) - # Generate the base 
config - config = rdkafka_base_config - # Add consumer specific fields to it - config[:"auto.offset.reset"] = "earliest" - config[:"enable.partition.eof"] = false - config[:"group.id"] = "ruby-test-#{SecureRandom.uuid}" - # Enable debug mode if required - if ENV["DEBUG_CONSUMER"] - config[:debug] = "cgrp,topic,fetch" - end - # Merge overrides - config.merge!(config_overrides) - # Return it - Rdkafka::Config.new(config) -end - -def rdkafka_producer_config(config_overrides={}) - # Generate the base config - config = rdkafka_base_config - # Enable debug mode if required if ENV["DEBUG_PRODUCER"] config[:debug] = "broker,topic,msg" + elsif ENV["DEBUG_CONSUMER"] + config[:debug] = "cgrp,topic,fetch" end - # Merge overrides config.merge!(config_overrides) - # Return it Rdkafka::Config.new(config) end def new_native_client - config = rdkafka_consumer_config + config = rdkafka_config config.send(:native_kafka, config.send(:native_config), :rd_kafka_producer) end @@ -74,8 +41,8 @@ def new_native_topic(topic_name="topic_name", native_client: ) end def wait_for_message(topic:, delivery_report:, timeout_in_seconds: 30, consumer: nil) - new_consumer = consumer.nil? - consumer ||= rdkafka_consumer_config.consumer + new_consumer = !!consumer + consumer ||= rdkafka_config.consumer consumer.subscribe(topic) timeout = Time.now.to_i + timeout_in_seconds loop do @@ -107,24 +74,7 @@ def wait_for_unassignment(consumer) end end -def notify_listener(listener, &block) - # 1. subscribe and poll - consumer.subscribe("consume_test_topic") - wait_for_assignment(consumer) - consumer.poll(100) - - block.call if block - - # 2. unsubscribe - consumer.unsubscribe - wait_for_unassignment(consumer) - consumer.close -end - RSpec.configure do |config| - config.filter_run focus: true - config.run_all_when_everything_filtered = true - config.before(:suite) do admin = rdkafka_config.admin { @@ -135,38 +85,14 @@ def notify_listener(listener, &block) rake_test_topic: 3, watermarks_test_topic: 3, partitioner_test_topic: 25, - example_topic: 1 }.each do |topic, partitions| create_topic_handle = admin.create_topic(topic.to_s, partitions, 1) begin - create_topic_handle.wait(max_wait_timeout: 1.0) + create_topic_handle.wait(max_wait_timeout: 15) rescue Rdkafka::RdkafkaError => ex raise unless ex.message.match?(/topic_already_exists/) end end admin.close end - - config.around(:each) do |example| - # Timeout specs after a minute. If they take longer - # they are probably stuck - Timeout::timeout(60) do - example.run - end - end -end - -class RdKafkaTestConsumer - def self.with - consumer = Rdkafka::Bindings.rd_kafka_new( - :rd_kafka_consumer, - nil, - nil, - 0 - ) - yield consumer - ensure - Rdkafka::Bindings.rd_kafka_consumer_close(consumer) - Rdkafka::Bindings.rd_kafka_destroy(consumer) - end end
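
For context, a minimal usage sketch of the single rdkafka_config helper that this patch leaves in spec_helper.rb (replacing the separate consumer/producer config helpers); the specs above build every consumer, producer and admin client through it. The topic name, payload and override value below are illustrative only.

    require "spec_helper"

    # Build a config with an override merged into the base test settings,
    # the same way the lag specs pass :"enable.partition.eof".
    config   = rdkafka_config(:"enable.partition.eof" => true)
    consumer = config.consumer
    producer = rdkafka_config.producer

    # Produce a message and wait for its delivery report.
    handle = producer.produce(
      topic: "consume_test_topic",
      payload: "illustrative payload",
      partition: 0
    )
    handle.wait(max_wait_timeout: 15)

    # Consume it back.
    consumer.subscribe("consume_test_topic")
    message = consumer.poll(1000)

    producer.close
    consumer.close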