Spring Batch RS provides a rich set of item readers and writers for various data sources and formats. All readers and writers use the builder pattern for easy, type-safe configuration.
Enable only the features you need to keep your dependencies minimal:
| Feature | Description | Status |
|---|---|---|
| `csv` | CSV file reading and writing | ✅ Stable |
| `json` | JSON file reading and writing | ✅ Stable |
| `xml` | XML file reading and writing | ✅ Stable |
| `mongodb` | MongoDB database integration | ✅ Stable |
| `rdbc-postgres` | PostgreSQL via RDBC | ✅ Stable |
| `rdbc-mysql` | MySQL/MariaDB via RDBC | ✅ Stable |
| `rdbc-sqlite` | SQLite via RDBC | ✅ Stable |
| `orm` | SeaORM integration | ✅ Stable |
| `fake` | Mock data generation | ✅ Stable |
| `logger` | Debug logging writer | ✅ Stable |
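For example, a `Cargo.toml` entry enabling only the CSV and JSON features might look like this (the version number, and the assumption that the features are opt-in rather than enabled by default, are illustrative):

```toml
[dependencies]
spring-batch-rs = { version = "0.3", default-features = false, features = ["csv", "json"] }
```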
- **CSV**: Read and write CSV files with configurable delimiters, headers, and quoting
- **JSON**: Handle JSON arrays and objects with pretty-printing support
- **XML**: Process XML documents with custom root and item elements
- **RDBC (Generic SQL)**: Direct database access for PostgreSQL, MySQL, and SQLite using SQLx
- **SeaORM**: Type-safe ORM with pagination and filtering
- **MongoDB**: Native MongoDB document operations
- **Fake Data Generator**: Generate realistic mock data for testing
- **Logger Writer**: Debug output for development
Reading CSV files:

```rust
use serde::Deserialize;
use spring_batch_rs::item::csv::CsvItemReaderBuilder;

#[derive(Deserialize)]
struct Product {
    id: u32,
    name: String,
    price: f64,
}

let reader = CsvItemReaderBuilder::<Product>::new()
    .has_headers(true)
    .delimiter(b',')
    .from_path("products.csv")?;
```

Writing JSON files:

```rust
use spring_batch_rs::item::json::JsonItemWriterBuilder;

let writer = JsonItemWriterBuilder::<MyType>::new()
    .pretty_formatter(true)
    .from_path("output.json")?;
```

Reading from a relational database via RDBC:

```rust
use spring_batch_rs::item::rdbc::rdbc_reader::{RdbcItemReaderBuilder, RdbcRowMapper};
use sqlx::{AnyPool, Row};

struct ProductRowMapper;

impl RdbcRowMapper<Product> for ProductRowMapper {
    fn map_row(&self, row: &sqlx::any::AnyRow) -> Product {
        Product {
            id: row.get("id"),
            name: row.get("name"),
            price: row.get("price"),
        }
    }
}

let pool = AnyPool::connect("postgresql://user:pass@localhost/db").await?;

let reader = RdbcItemReaderBuilder::new()
    .pool(&pool)
    .query("SELECT id, name, price FROM products")
    .page_size(1000)
    .row_mapper(&ProductRowMapper)
    .build();
```

You can create custom implementations by implementing the respective traits:
```rust
use spring_batch_rs::core::item::ItemReader;
use spring_batch_rs::BatchError;

struct MyCustomReader {
    data: Vec<String>,
    index: usize,
}

impl ItemReader<String> for MyCustomReader {
    fn read(&mut self) -> Result<Option<String>, BatchError> {
        if self.index < self.data.len() {
            let item = self.data[self.index].clone();
            self.index += 1;
            Ok(Some(item))
        } else {
            Ok(None) // No more items
        }
    }
}
```

```rust
use spring_batch_rs::core::item::ItemWriter;
use spring_batch_rs::BatchError;

struct MyCustomWriter {
    output: Vec<String>,
}

impl ItemWriter<String> for MyCustomWriter {
    fn write(&mut self, items: &[String]) -> Result<(), BatchError> {
        for item in items {
            // Custom write logic here
            self.output.push(item.clone());
        }
        Ok(())
    }
}
```

Choose appropriate chunk sizes based on your data and memory:
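To see how a custom reader and writer fit together, here is a self-contained sketch. It uses simplified stand-in traits (the real ones live in `spring_batch_rs::core::item` and return `BatchError` rather than `String`) to show how a chunk-oriented step drives a reader and writer:

```rust
// Simplified stand-ins for the spring_batch_rs ItemReader / ItemWriter traits;
// the real ones use BatchError, simplified to String here.
trait ItemReader<T> {
    fn read(&mut self) -> Result<Option<T>, String>;
}

trait ItemWriter<T> {
    fn write(&mut self, items: &[T]) -> Result<(), String>;
}

struct VecReader {
    data: Vec<String>,
    index: usize,
}

impl ItemReader<String> for VecReader {
    fn read(&mut self) -> Result<Option<String>, String> {
        if self.index < self.data.len() {
            let item = self.data[self.index].clone();
            self.index += 1;
            Ok(Some(item))
        } else {
            Ok(None) // no more items
        }
    }
}

struct VecWriter {
    output: Vec<String>,
}

impl ItemWriter<String> for VecWriter {
    fn write(&mut self, items: &[String]) -> Result<(), String> {
        self.output.extend_from_slice(items);
        Ok(())
    }
}

// Drive the reader in fixed-size chunks, as a chunk-oriented step would.
fn run_chunks(data: Vec<String>, chunk_size: usize) -> Result<Vec<String>, String> {
    let mut reader = VecReader { data, index: 0 };
    let mut writer = VecWriter { output: Vec::new() };
    let mut chunk = Vec::new();

    while let Some(item) = reader.read()? {
        chunk.push(item);
        if chunk.len() == chunk_size {
            writer.write(&chunk)?;
            chunk.clear();
        }
    }
    if !chunk.is_empty() {
        writer.write(&chunk)?; // flush the final partial chunk
    }
    Ok(writer.output)
}

fn main() {
    let out = run_chunks(vec!["a".into(), "b".into(), "c".into()], 2).unwrap();
    println!("{:?}", out); // ["a", "b", "c"]
}
```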
```rust
// Small chunks (10-50): memory-constrained environments
.chunk(25)

// Medium chunks (100-500): balanced performance
.chunk(250)

// Large chunks (1000+): high-throughput scenarios
.chunk(1000)
```

For database readers, configure page sizes to optimize memory:

```rust
let reader = RdbcItemReaderBuilder::new()
    .page_size(500) // Fetch 500 records at a time
    .build();
```

File-based readers and writers use buffered I/O by default for optimal performance.
Explore the detailed documentation for each reader and writer type.