The date_parse_input_handler extension allows for configuring how input to date parsing methods should be handled. By default, the extension does not change behavior. However, you can use the Sequel.date_parse_input_handler
method to support custom handling of input strings to the date parsing methods. For example, if you want to implement a length check to prevent denial of service vulnerabilities in older versions of Ruby, you can do:
Sequel.extension :date_parse_input_handler
Sequel.date_parse_input_handler do |string|
  raise Sequel::InvalidValue, "string length (200) exceeds the limit 128" if string.bytesize > 128
  string
end
You can also use Sequel.date_parse_input_handler
to modify the string that will be passed to the parsing methods. For example, you could truncate it:
Sequel.date_parse_input_handler do |string|
  string.b[0, 128]
end
Be aware that modern versions of Ruby will raise an exception if date parsing input exceeds 128 bytes.
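With a length-checking handler like the one above installed, oversized input is rejected before it reaches the parser. A minimal sketch, assuming Sequel.string_to_date is among the date parsing methods the handler applies to:

Sequel.extension :date_parse_input_handler
Sequel.date_parse_input_handler do |string|
  # reject oversized input before parsing
  raise Sequel::InvalidValue, "date parse input too long" if string.bytesize > 128
  string
end

Sequel.string_to_date("2024-01-15")              # parses normally
Sequel.string_to_date("2024-01-15" + " " * 200)  # raises Sequel::InvalidValue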
The duplicate_columns_handler extension allows you to customize handling of duplicate column names in your queries on a per-database or per-dataset level.
For example, you may want to raise an exception if you join two tables that share a column name, since one column’s values will overwrite the other’s in the returned rows.
To use the extension, you need to load the extension into the database:
DB.extension :duplicate_columns_handler
or into individual datasets:
ds = DB[:items].extension(:duplicate_columns_handler)
If the Database option :on_duplicate_columns is set, it configures how this extension works. The value should be :raise, :warn, :ignore, or any object that responds to call:
on_duplicate_columns: :raise # or 'raise'
on_duplicate_columns: :warn # or 'warn'
on_duplicate_columns: :ignore # or anything unrecognized
on_duplicate_columns: lambda{|columns| arbitrary_condition? ? :raise : :warn}
You may also configure duplicate columns handling for a specific dataset:
ds.on_duplicate_columns(:warn)
ds.on_duplicate_columns(:raise)
ds.on_duplicate_columns(:ignore)
ds.on_duplicate_columns{|columns| arbitrary_condition? ? :raise : :warn}
ds.on_duplicate_columns(lambda{|columns| arbitrary_condition? ? :raise : :warn})
If :raise or 'raise' is specified, a Sequel::DuplicateColumnError is raised. If :warn or 'warn' is specified, you will receive a warning via warn. If a callable is specified, it will be called. For other values, duplicate columns are ignored (Sequel’s default behavior). If no on_duplicate_columns is specified, the default is :warn.
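For instance, a join between two hypothetical tables that both define an id column would trigger the handler once the result columns are known. A short sketch:

DB.extension :duplicate_columns_handler

# :items and :orders are hypothetical tables that both have an id column
ds = DB[:items].join(:orders, item_id: :id).on_duplicate_columns(:raise)
ds.all  # raises Sequel::DuplicateColumnError, since id appears twice in the results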
Related module: Sequel::DuplicateColumnsHandler
This extension changes Sequel’s postgres adapter to automatically parameterize queries by default. Sequel’s default behavior has always been to literalize all arguments unless specifically using parameters (via :$arg placeholders and the Dataset#prepare/call methods). This extension makes Sequel pass string, numeric, blob, date, and time values as bound parameters. Example:
# Default
DB[:test].where(:a=>1)
# SQL: SELECT * FROM test WHERE a = 1

DB.extension :pg_auto_parameterize
DB[:test].where(:a=>1)
# SQL: SELECT * FROM test WHERE a = $1 (args: [1])
Other pg_* extensions that ship with Sequel
and add support for PostgreSQL-specific types support automatically parameterizing those types when used with this extension.
This extension is not generally faster than the default behavior. In some cases it is faster, such as when using large strings. However, the use of parameters avoids potential security issues, in case Sequel
does not correctly literalize one of the arguments that this extension would automatically parameterize.
There are some known issues with automatic parameterization:
- In order to avoid most type errors, the extension attempts to guess the appropriate type and automatically casts most placeholders, except plain Ruby strings (which PostgreSQL treats as an unknown type). Unfortunately, if the type guess is incorrect, or a plain Ruby string is used and PostgreSQL cannot determine the data type for it, the query may result in a DatabaseError. To fix both issues, you can explicitly cast values using Sequel.cast(value, type), and Sequel will cast to that type.
- PostgreSQL supports a maximum of 65535 parameters per query. Attempts to use a query with more than this number of parameters will result in a Sequel::DatabaseError being raised. Sequel tries to mitigate this issue by turning column IN (int, ...) queries into column = ANY(CAST($ AS int8[])) using an array parameter, to reduce the number of parameters. It also limits inserting multiple rows at once to a maximum of 40 rows per query by default. While these mitigations handle the most common cases where a large number of parameters would be used, there are other cases.
- Automatic parameterization will consider the same objects as equivalent when building SQL. However, for performance, it does not perform equality checks. So code such as:

    DB[:t].select{foo('a').as(:f)}.group{foo('a')}
    # SELECT foo('a') AS "f" FROM "t" GROUP BY foo('a')

  will get auto-parameterized as:

    # SELECT foo($1) AS "f" FROM "t" GROUP BY foo($2)

  which will result in a DatabaseError, since that is not valid SQL. If you use the same expression, it will use the same parameter:

    foo = Sequel.function(:foo, 'a')
    DB[:t].select(foo.as(:f)).group(foo)
    # SELECT foo($1) AS "f" FROM "t" GROUP BY foo($1)

  Note that Dataset#select_group and similar methods that take arguments used in multiple places in the SQL will generally handle this automatically, since they will use the same objects:

    DB[:t].select_group{foo('a').as(:f)}
    # SELECT foo($1) AS "f" FROM "t" GROUP BY foo($1)
You can work around any issues that come up by disabling automatic parameterization by calling the no_auto_parameterize method on the dataset (which returns a clone of the dataset). You can avoid parameterization for specific values in the query by wrapping them with Sequel.skip_pg_auto_param.
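A short sketch of both workarounds (the table and column names are hypothetical):

# assumes DB.extension :pg_auto_parameterize has already been loaded
DB[:table].no_auto_parameterize.where(a: 1)
# the value is literalized into the SQL instead of being bound as $1

DB[:table].where(a: Sequel.skip_pg_auto_param(1))
# only this value is literalized; other values are still parameterized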
It is likely there are corner cases not mentioned above when using this extension. Users are encouraged to provide feedback when using this extension if they come across such corner cases.
This extension is only compatible when using the pg driver, not when using the sequel-postgres-pr, jeremyevans-postgres-pr, or postgres-pr drivers, as those do not support bound variables.
Related module: Sequel::Postgres::AutoParameterize
The pg_auto_parameterize_in_array extension builds on the pg_auto_parameterize extension, adding support for handling additional types when converting from IN to = ANY and NOT IN to != ALL:
DB[:table].where(column: [1.0, 2.0, ...])
# Without extension: column IN ($1::numeric, $2::numeric, ...)
#   bound variables: 1.0, 2.0, ...
# With extension:    column = ANY($1::numeric[])
#   bound variables: [1.0, 2.0, ...]
This prevents the use of an unbounded number of bound variables based on the size of the array, as well as using different SQL
for different array sizes.
The following types are supported when doing the conversions, with the database type used:
Float             | if any are infinite or NaN, double precision, otherwise numeric
BigDecimal        | numeric
Date              | date
Time              | timestamp (or timestamptz if pg_timestamptz extension is used)
DateTime          | timestamp (or timestamptz if pg_timestamptz extension is used)
Sequel::SQLTime   | time
Sequel::SQL::Blob | bytea
String values are also supported using the text type, but only if the :treat_string_list_as_text_array Database option is used. This is because treating strings as text can break programs, since the type for literal strings in PostgreSQL is unknown, not text.
The conversion is only done for single dimensional arrays that have more than two elements, where all elements are of the same class (other than nil values).
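A sketch of the effect, assuming both extensions are loaded (:table and :column are hypothetical names):

DB.extension :pg_auto_parameterize, :pg_auto_parameterize_in_array

DB[:table].where(column: [Date.new(2024, 1, 1), Date.new(2024, 2, 1), Date.new(2024, 3, 1)])
# column = ANY($1::date[]), with the whole array as a single bound variable

DB[:table].where(column: %w[a b c])
# string lists are left as individual parameters unless the
# :treat_string_list_as_text_array Database option is set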
Related module: Sequel::Postgres::AutoParameterizeInArray
The pg_range extension adds support for the PostgreSQL 9.2+ range types to Sequel. PostgreSQL range types are similar to ruby’s Range class, representing a range of values. However, they are more flexible than ruby’s ranges, allowing exclusive beginnings and endings (ruby’s range only allows exclusive endings).
When PostgreSQL range values are retrieved, they are parsed and returned as instances of Sequel::Postgres::PGRange. PGRange mostly acts like a Range, but it’s not a Range, as not all PostgreSQL range type values would be valid ruby ranges. If the range type value you are using is a valid ruby range, you can call PGRange#to_range to get a Range. However, if you call PGRange#to_range on a range type value that uses features ruby’s Range does not support, an exception will be raised.
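A small sketch of the difference, constructing PGRange values directly (assuming the extension has been loaded as described below):

r = Sequel::Postgres::PGRange.new(1, 10)
r.to_range   # => 1..10

r2 = Sequel::Postgres::PGRange.new(1, 10, exclude_begin: true)
r2.to_range  # raises, since ruby's Range cannot exclude the beginning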
In addition to the parser, this extension comes with literalizers for PGRange and Range
, so they can be used in queries and as bound variables.
To turn an existing Range
into a PGRange, use Sequel.pg_range:
Sequel.pg_range(range)
If you have loaded the core_extensions extension, or you have loaded the core_refinements extension and have activated refinements for the file, you can also use Range#pg_range:
range.pg_range
You may want to specify a specific range type:
Sequel.pg_range(range, :daterange)
range.pg_range(:daterange)
If you specify the range database type, Sequel
will automatically cast the value to that type when literalizing.
To use this extension, load it into the Database
instance:
DB.extension :pg_range
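With the extension loaded, specifying the range database type as shown above results in an explicit cast when the value is literalized. A short sketch (the table and column names are hypothetical):

r = Sequel.pg_range(Date.new(2024, 1, 1)..Date.new(2024, 1, 31), :daterange)
DB[:events].where(during: r)
# the literalized range value is cast to daterange in the WHERE clause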
See the schema modification guide for details on using range type columns in CREATE/ALTER TABLE statements.
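As a brief sketch, a range-typed column can be declared like any other column type in a table definition (the table and column names are hypothetical):

DB.create_table(:events) do
  primary_key :id
  column :during, :daterange
end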
This extension makes it easy to add support for other range types. In general, you just need to make sure that the subtype is handled and has the appropriate converter installed. For user defined types, you can do this via:
DB.add_conversion_proc(subtype_oid){|string| }
Then you can call Sequel::Postgres::PGRange::DatabaseMethods#register_range_type
to automatically set up a handler for the range type. So if you want to support the timerange type (assuming the time type is already supported):
DB.register_range_type('timerange')
This extension integrates with the pg_array extension. If you plan to use arrays of range types, load the pg_array extension before the pg_range extension:
DB.extension :pg_array, :pg_range
Related module: Sequel::Postgres::PGRange
The round_timestamps extension will automatically round timestamp values to the database’s supported level of precision before literalizing them.
For example, if the database supports millisecond precision, and you give it a Time value with microsecond precision, it will round it appropriately:
Time.at(1405341161.917999982833862)
# default:        2014-07-14 14:32:41.917999
# with extension: 2014-07-14 14:32:41.918000
The round_timestamps extension correctly deals with databases that support millisecond or second precision. In addition to handling Time values, it also handles DateTime values and Sequel::SQLTime values (for the TIME type).
To round timestamps for a single dataset:
ds = ds.extension(:round_timestamps)
To round timestamps for all datasets on a single database:
DB.extension(:round_timestamps)
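A sketch of the effect on a query, using the value from the example above (the :events table and :at column are hypothetical):

DB.extension(:round_timestamps)
DB[:events].where(at: Time.at(1405341161.917999982833862))
# the literalized timestamp ends in 41.918000 rather than 41.917999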
Related module: Sequel::Dataset::RoundTimestamps
Classes and Modules
- Sequel::AnyNotEmpty
- Sequel::ArbitraryServers
- Sequel::AutoCastDateAndTime
- Sequel::CallerLogging
- Sequel::ColumnsIntrospection
- Sequel::ConnectionExpiration
- Sequel::ConnectionValidator
- Sequel::ConstantSqlOverride
- Sequel::ConstraintValidations
- Sequel::CoreRefinements
- Sequel::CurrentDateTimeTimestamp
- Sequel::DatabaseQuery
- Sequel::DatasetPagination
- Sequel::DatasetPrinter
- Sequel::DatasetQuery
- Sequel::DatasetRun
- Sequel::DateParseInputHandler
- Sequel::DateTimeParseToTime
- Sequel::DuplicateColumnsHandler
- Sequel::EmptyArrayConsiderNulls
- Sequel::ErrorSQL
- Sequel::EvalInspect
- Sequel::ExcludeOrNull
- Sequel::FiberConcurrency
- Sequel::GraphEach
- Sequel::IdentifierMangling
- Sequel::IndexCaching
- Sequel::Integer64
- Sequel::LooserTypecasting
- Sequel::MSSQL
- Sequel::Model
- Sequel::NamedTimezones
- Sequel::Plugins
- Sequel::Postgres
- Sequel::PrettyTable
- Sequel::S
- Sequel::SQL
- Sequel::SQLComments
- Sequel::SQLLogNormalizer
- Sequel::SQLite
- Sequel::Schema
- Sequel::SchemaCaching
- Sequel::SchemaDumper
- Sequel::SelectRemove
- Sequel::Sequel4DatasetMethods
- Sequel::ServerBlock
- Sequel::ServerLogging
- Sequel::SymbolAref
- Sequel::SymbolAs
- Sequel::TemporarilyReleaseConnection
- Sequel::ThreadLocalTimezones
- Sequel::ThreadedServerBlock
- Sequel::TransactionConnectionValidator
- Sequel::UnthreadedServerBlock
- Database
- Sequel::DatabaseError
- Sequel::Dataset
- Sequel::DuplicateColumnError
- Sequel::IntegerMigrator
- Sequel::LiteralString
- Sequel::Migration
- Sequel::MigrationAlterTableReverser
- Sequel::MigrationDSL
- Sequel::MigrationReverser
- Sequel::Migrator
- Sequel::SimpleMigration
- Sequel::StdioLogger
- Sequel::TimestampMigrator
- Sequel::ToDot
- Sequel::UnableToReacquireConnectionError
Public Class methods
This extension loads the core extensions.
# File lib/sequel/extensions/core_extensions.rb
def Sequel.core_extensions?
  true
end
The preferred method for writing Sequel
migrations, using a DSL:
Sequel.migration do
  up do
    create_table(:artists) do
      primary_key :id
      String :name
    end
  end

  down do
    drop_table(:artists)
  end
end
Designed to be used with the Migrator
class, part of the migration
extension.
# File lib/sequel/extensions/migration.rb
def self.migration(&block)
  MigrationDSL.create(&block)
end
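A sketch of applying migrations written with this DSL via the Migrator (the directory path is hypothetical):

Sequel.extension :migration
Sequel::Migrator.run(DB, "db/migrations")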