feat: add support for array_position expression #3172

andygrove wants to merge 29 commits into apache:main from
Conversation
Implements Spark's array_position function, which returns the 1-based position of an element in an array, or 0 if not found. This required a custom Rust implementation because DataFusion's array_position returns UInt64 and null when not found, while Spark returns Int64 (LongType) and 0.

Key implementation details:

- Returns Int64 to match Spark's LongType
- Returns 0 when the element is not found (Spark behavior)
- Returns null when the array is null or the search element is null
- Supports both List and LargeList array types

Closes apache#3153

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
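The Spark-vs-DataFusion semantics described above can be sketched with plain Rust slices standing in for the actual Arrow arrays. This is a hedged illustration of the null/not-found behavior only; `array_position` here is a local helper, not the PR's real kernel signature:

```rust
// Illustrative sketch: Spark-compatible array_position semantics.
// A null array or null search element yields null; a missing element
// yields 0 (not null, unlike DataFusion's builtin array_position).
fn array_position(arr: Option<&[Option<f64>]>, elem: Option<f64>) -> Option<i64> {
    let (arr, elem) = match (arr, elem) {
        (Some(a), Some(e)) => (a, e),
        _ => return None, // null array or null element -> null result
    };
    for (i, v) in arr.iter().enumerate() {
        if let Some(v) = v {
            if *v == elem {
                return Some(i as i64 + 1); // 1-based position, Int64 like Spark's LongType
            }
        }
    }
    Some(0) // not found -> 0 (Spark behavior)
}

fn main() {
    assert_eq!(array_position(Some(&[Some(1.0), Some(2.0), Some(3.0)][..]), Some(2.0)), Some(2));
    assert_eq!(array_position(Some(&[Some(1.0)][..]), Some(9.0)), Some(0)); // not found
    assert_eq!(array_position(None, Some(1.0)), None); // null array
    assert_eq!(array_position(Some(&[Some(1.0)][..]), None), None); // null element
}
```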
Codecov Report

❌ Patch coverage is

Additional details and impacted files

```diff
@@             Coverage Diff              @@
##              main    #3172       +/-   ##
============================================
+ Coverage     56.12%   59.96%     +3.84%
- Complexity      976     1462       +486
============================================
  Files           119      175        +56
  Lines         11743    16180      +4437
  Branches       2251     2684       +433
============================================
+ Hits           6591     9703      +3112
- Misses         4012     5128      +1116
- Partials       1140     1349       +209
```

☔ View full report in Codecov by Sentry.
# Conflicts:
# docs/source/user-guide/latest/configs.md
# native/spark-expr/src/comet_scalar_funcs.rs
Moving this to draft until #3328 is merged
Move array_position tests from CometArrayExpressionSuite to a SQL file test and fall back to Spark when all arguments are literals. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…o feature/array-position

# Conflicts:
# native/spark-expr/src/array_funcs/mod.rs
# native/spark-expr/src/comet_scalar_funcs.rs
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Remove stray array_repeat references left over from merge conflict resolution. Add a NULL value row to the test data and add tests for all supported array element types: boolean, tinyint, smallint, bigint, float, double, decimal, date, and timestamp.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Just wondering, can we reuse DF? The builtin function gets optimized in apache/datafusion#20532
The DF implementation isn't compatible with Spark though. |
… review feedback

- Use typed array downcasting instead of ScalarValue for element comparison, improving performance from 0.4X to 0.7-0.8X of Spark
- Add a getSupportLevel override marking the expression as Incompatible (NaN equality)
- Add NaN edge case tests for float/double arrays
- Add CometArrayExpressionBenchmark microbenchmark
- Make the spark_array_position function private
- Update docs to mark array_position as supported
Treat NaN == NaN in float/double comparisons, matching Spark's ordering.equiv() behavior. This makes array_position Compatible rather than Incompatible.
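As a sketch of what that NaN handling means, the comparison below matches Spark's `ordering.equiv()` for doubles, where NaN compares equal to NaN, unlike IEEE 754 `==`. This is an illustrative helper, not the PR's actual code:

```rust
// Spark-style equivalence for floating point: NaN is equal to NaN.
fn equiv_f64(a: f64, b: f64) -> bool {
    a == b || (a.is_nan() && b.is_nan())
}

fn main() {
    assert!(equiv_f64(f64::NAN, f64::NAN)); // Spark ordering.equiv(): NaN == NaN
    assert!(f64::NAN != f64::NAN);          // IEEE 754 `==`: NaN never equals NaN
    assert!(equiv_f64(1.5, 1.5));
    assert!(!equiv_f64(1.5, 2.5));
}
```

Without this, a NaN search element could never be found in a NaN-containing array, which is why the expression was initially marked Incompatible.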
Avoid per-row subarray allocation from list_array.value(row_index). Instead, downcast the flat values buffer once and iterate using offset ranges directly. Improves from 0.7-0.8X to 0.9X of Spark.
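The flat-values iteration described above can be sketched with plain slices standing in for Arrow's child values and offset buffers (names are illustrative; the real code downcasts the list array's flat values once and indexes by its OffsetBuffer):

```rust
// Illustrative sketch: find a 1-based position within one list row by
// slicing the flattened child values with the list offsets, instead of
// allocating a per-row subarray. `offsets` has length rows + 1.
fn position_in_row(values: &[i64], offsets: &[usize], row: usize, elem: i64) -> i64 {
    let (start, end) = (offsets[row], offsets[row + 1]);
    values[start..end]
        .iter()
        .position(|v| *v == elem)
        .map(|i| i as i64 + 1) // 1-based
        .unwrap_or(0) // not found -> 0
}

fn main() {
    // Two list rows, [1, 2] and [3, 4, 5], flattened with offsets [0, 2, 5].
    let values = [1, 2, 3, 4, 5];
    let offsets = [0, 2, 5];
    assert_eq!(position_in_row(&values, &offsets, 0, 2), 2);
    assert_eq!(position_in_row(&values, &offsets, 1, 4), 2);
    assert_eq!(position_in_row(&values, &offsets, 1, 9), 0);
}
```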
Switches benchmark to use SCAN_NATIVE_DATAFUSION for the Comet cases, avoiding JVM parquet reader overhead. Results now show Comet is 1.1-1.2X faster than Spark.
# Conflicts:
# native/spark-expr/src/comet_scalar_funcs.rs
# spark/src/test/scala/org/apache/spark/sql/benchmark/CometArrayExpressionBenchmark.scala
Claude summarized my notes for me. Hopefully it didn't transcribe anything wrong or hallucinate :)
- Compute the combined null buffer upfront via NullBuffer::union and use Vec<i64> with Int64Array::new() instead of Vec<Option<i64>>, avoiding per-row null tracking overhead in all typed paths
- Use TypeSignature::Any(2) instead of variadic_any for precise arity
- Replace .unwrap() on downcast with .ok_or_else() for safer error handling
- Add nested array test cases to exercise the position_fallback code path
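The "compute validity upfront" pattern from the first bullet can be sketched with plain Vecs standing in for Arrow buffers: the output null mask is the AND of both inputs' validity (what `NullBuffer::union` computes in arrow-rs), so the value loop fills a dense `Vec<i64>` with no per-row `Option` tracking. Names here are illustrative, not the PR's actual signatures:

```rust
// Illustrative sketch: build dense values plus a separate validity mask,
// instead of a Vec<Option<i64>> with per-row null checks.
fn positions(arrays: &[Option<Vec<i64>>], elems: &[Option<i64>]) -> (Vec<i64>, Vec<bool>) {
    // Validity computed once, upfront (Arrow: NullBuffer::union of both inputs).
    let validity: Vec<bool> = arrays
        .iter()
        .zip(elems)
        .map(|(a, e)| a.is_some() && e.is_some())
        .collect();
    // Dense value buffer; slots masked by validity just hold 0.
    let values: Vec<i64> = arrays
        .iter()
        .zip(elems)
        .map(|(a, e)| match (a, e) {
            (Some(a), Some(e)) => a
                .iter()
                .position(|v| v == e)
                .map(|i| i as i64 + 1)
                .unwrap_or(0),
            _ => 0, // value is ignored: the validity mask marks this row null
        })
        .collect();
    (values, validity)
}

fn main() {
    let arrays = vec![Some(vec![1, 2, 3]), None, Some(vec![4])];
    let elems = vec![Some(2), Some(1), None];
    let (values, validity) = positions(&arrays, &elems);
    assert_eq!(values, vec![2, 0, 0]);
    assert_eq!(validity, vec![true, false, false]);
}
```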
# Conflicts:
# native/spark-expr/src/comet_scalar_funcs.rs
# spark/src/main/scala/org/apache/comet/serde/arrays.scala
comphead left a comment
Thanks @andygrove, we would need to backport it to the datafusion-spark crate.
For testing, should we test all possible datatypes?
I added
Also added some nested array columnar tests
# Conflicts:
# spark/src/main/scala/org/apache/comet/serde/arrays.scala
Which issue does this PR close?
Closes #3157.
Closes #3153.
Rationale for this change
Spark's `array_position` function is not currently accelerated by Comet. Adding native support allows this expression to run on the native execution engine.

What changes are included in this PR?

Adds native Comet support for Spark's `array_position` function, which returns the 1-based position of an element in an array, or 0 if not found.

This required a custom Rust implementation because DataFusion's `array_position` returns `UInt64` and `null` when not found, while Spark returns `Int64` (LongType) and `0`.

Key implementation details:

- Returns `Int64` to match Spark's `LongType`
- Returns `0` when the element is not found (Spark behavior)
- Returns `null` when the array is `null` or the search element is `null`
- Supports both `List` and `LargeList` array types
- Matches Spark's `ordering.equiv()` semantics (NaN == NaN)

Benchmark Results
Comet native execution is 1.1-1.2X faster than Spark for this expression when using the native DataFusion scan.
How are these changes tested?
SQL file-based tests in `spark/src/test/resources/sql-tests/expressions/array/array_position.sql` covering:

`CometArrayExpressionBenchmark`