A dataset represents an SQL query. Datasets can be used to select, insert, update, and delete records.

Query results are always retrieved on demand, so a dataset can be kept around and reused indefinitely (datasets never cache results):
my_posts = DB[:posts].where(author: 'david') # no records are retrieved
my_posts.all # records are retrieved
my_posts.all # records are retrieved again
Datasets are frozen and use a functional style, where modification methods return modified copies of the dataset. This allows you to reuse datasets:
posts = DB[:posts]
davids_posts = posts.where(author: 'david')
old_posts = posts.where{stamp < Date.today - 7}
davids_old_posts = davids_posts.where{stamp < Date.today - 7}
Datasets are Enumerable objects, so they can be manipulated using many of the Enumerable methods, such as map and inject. Note that there are some methods that Dataset defines that override methods defined in Enumerable and result in different behavior, such as select and group_by.
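For instance, Dataset#select changes the SELECT clause and returns a modified dataset, while the Enumerable-style map executes the query and works on the loaded rows (a small sketch assuming an items table with a name column):

DB[:items].select(:name)         # dataset for: SELECT name FROM items (no query run yet)
DB[:items].map{|row| row[:name]} # runs SELECT * FROM items and returns an array of names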
For more information, see the “Dataset Basics” guide.
Methods
Public Class
Public Instance
- <<
- ==
- []
- add_graph_aliases
- aliased_expression_sql_append
- all
- array_sql_append
- as_hash
- avg
- bind
- boolean_constant_sql_append
- call
- case_expression_sql_append
- cast_sql_append
- clone
- column_all_sql_append
- columns
- columns!
- complex_expression_sql_append
- constant_sql_append
- count
- current_datetime
- db
- delayed_evaluation_sql_append
- delete
- distinct
- dup
- each
- each_server
- empty?
- eql?
- escape_like
- except
- exclude
- exclude_having
- exists
- extension
- filter
- first
- first!
- first_source
- first_source_alias
- first_source_table
- for_update
- freeze
- from
- from_self
- function_sql_append
- get
- graph
- grep
- group
- group_and_count
- group_append
- group_by
- group_cube
- group_rollup
- grouping_sets
- hash
- having
- import
- insert
- insert_sql
- inspect
- intersect
- invert
- join
- join_clause_sql_append
- join_on_clause_sql_append
- join_table
- join_using_clause_sql_append
- joined_dataset?
- last
- lateral
- limit
- literal_append
- lock_style
- map
- max
- merge
- merge_delete
- merge_insert
- merge_sql
- merge_update
- merge_using
- min
- multi_insert
- multi_insert_sql
- naked
- negative_boolean_constant_sql_append
- nowait
- offset
- opts
- or
- order
- order_append
- order_by
- order_more
- order_prepend
- ordered_expression_sql_append
- paged_each
- placeholder_literal_string_sql_append
- placeholder_literalizer_class
- placeholder_literalizer_loader
- prepare
- provides_accurate_rows_matched?
- qualified_identifier_sql_append
- qualify
- quote_identifier_append
- quote_identifiers?
- quote_schema_table_append
- quoted_identifier_append
- recursive_cte_requires_column_aliases?
- requires_placeholder_type_specifiers?
- requires_sql_standard_datetimes?
- returning
- reverse
- reverse_order
- row_number_column
- row_proc
- schema_and_table
- select
- select_all
- select_append
- select_group
- select_hash
- select_hash_groups
- select_map
- select_more
- select_order_map
- server
- server?
- set_graph_aliases
- single_record
- single_record!
- single_value
- single_value!
- skip_limit_check
- skip_locked
- split_alias
- split_qualifiers
- sql
- subscript_sql_append
- sum
- supports_cte?
- supports_cte_in_subqueries?
- supports_deleting_joins?
- supports_derived_column_lists?
- supports_distinct_on?
- supports_group_cube?
- supports_group_rollup?
- supports_grouping_sets?
- supports_insert_select?
- supports_intersect_except?
- supports_intersect_except_all?
- supports_is_true?
- supports_join_using?
- supports_lateral_subqueries?
- supports_limits_in_correlated_subqueries?
- supports_merge?
- supports_modifying_joins?
- supports_multiple_column_in?
- supports_nowait?
- supports_offsets_in_correlated_subqueries?
- supports_ordered_distinct_on?
- supports_placeholder_literalizer?
- supports_regexp?
- supports_replace?
- supports_returning?
- supports_select_all_and_column?
- supports_skip_locked?
- supports_timestamp_timezones?
- supports_timestamp_usecs?
- supports_updating_joins?
- supports_where_true?
- supports_window_clause?
- supports_window_function_frame_option?
- supports_window_functions?
- to_hash
- to_hash_groups
- truncate
- truncate_sql
- unfiltered
- ungraphed
- ungrouped
- union
- unlimited
- unordered
- unqualified_column_for
- unused_table_alias
- update
- update_sql
- where
- where_all
- where_each
- where_single_value
- window
- window_sql_append
- with
- with_extend
- with_quote_identifiers
- with_recursive
- with_row_proc
- with_sql
- with_sql_all
- with_sql_delete
- with_sql_each
- with_sql_first
- with_sql_insert
- with_sql_single_value
Protected Instance
Contents
- 1 - Methods that return modified datasets
- 2 - Methods that execute code on the database
- 3 - User Methods relating to SQL Creation
- 4 - Methods that describe what the dataset supports
- 5 - Methods related to dataset graphing
- 6 - Miscellaneous methods
- 8 - Methods related to prepared statements or bound variables
- 9 - Internal Methods relating to SQL Creation
Classes and Modules
Constants
OPTS = Sequel::OPTS

TRUE_FREEZE = RUBY_VERSION >= '2.4'
    Whether …
1 - Methods that return modified datasets
Constants
COLUMN_CHANGE_OPTS = [:select, :sql, :from, :join].freeze
    The dataset options that require the removal of cached columns if changed.

CONDITIONED_JOIN_TYPES = [:inner, :full_outer, :right_outer, :left_outer, :full, :right, :left].freeze
    These symbols have _join methods created (e.g. inner_join) that call join_table with the symbol.

EMPTY_ARRAY = [].freeze

EXTENSIONS = {}
    Hash of extension name symbols to callable objects to load the extension into the Dataset object.

JOIN_METHODS = ((CONDITIONED_JOIN_TYPES + UNCONDITIONED_JOIN_TYPES).map{|x| "#{x}_join".to_sym} + [:join, :join_table]).freeze
    All methods that return modified datasets with a joined table added.

NON_SQL_OPTIONS = [:server, :graph, :row_proc, :quote_identifiers, :skip_symbol_cache].freeze
    Which options don’t affect the SQL generated.

QUERY_METHODS = ((<<-METHS).split.map(&:to_sym) + JOIN_METHODS).freeze
    Methods that return modified datasets

SIMPLE_SELECT_ALL_ALLOWED_FROM = [Symbol, SQL::Identifier, SQL::QualifiedIdentifier].freeze
    From types allowed to be considered a simple_select_all

UNCONDITIONED_JOIN_TYPES = [:natural, :natural_left, :natural_right, :natural_full, :cross].freeze
    These symbols have _join methods created (e.g. natural_join). They accept a table argument and options hash which is passed to join_table.
Public Instance Aliases
_clone -> clone
    Save original clone implementation, as some other methods need to call it internally.
Public Class methods
Register an extension callback for Dataset objects. ext should be the extension name symbol, and mod should be a Module that will be included in the dataset’s class. This also registers a Database extension that will extend all of the database’s datasets.
# File lib/sequel/dataset/query.rb 55 def self.register_extension(ext, mod=nil, &block) 56 if mod 57 raise(Error, "cannot provide both mod and block to Dataset.register_extension") if block 58 if mod.is_a?(Module) 59 block = proc{|ds| ds.extend(mod)} 60 Sequel::Database.register_extension(ext){|db| db.extend_datasets(mod)} 61 Sequel.synchronize{EXTENSION_MODULES[ext] = mod} 62 else 63 block = mod 64 end 65 end 66 67 unless mod.is_a?(Module) 68 Sequel::Deprecation.deprecate("Providing a block or non-module to Sequel::Dataset.register_extension is deprecated and support for it will be removed in Sequel 6.") 69 end 70 71 Sequel.synchronize{EXTENSIONS[ext] = block} 72 end
Public Instance methods
Returns a new clone of the dataset with the given options merged. If the options changed include options in COLUMN_CHANGE_OPTS
, the cached columns are deleted. This method should generally not be called directly by user code.
# File lib/sequel/dataset/query.rb 90 def clone(opts = nil || (return self)) 91 # return self used above because clone is called by almost all 92 # other query methods, and it is the fastest approach 93 c = super(:freeze=>false) 94 c.opts.merge!(opts) 95 unless opts.each_key{|o| break if COLUMN_CHANGE_OPTS.include?(o)} 96 c.clear_columns_cache 97 end 98 c.freeze 99 end
Returns a copy of the dataset with the SQL DISTINCT clause. The DISTINCT clause is used to remove duplicate rows from the output. If arguments are provided, uses a DISTINCT ON clause, in which case it will only be distinct on those columns, instead of all returned columns. If a block is given, it is treated as a virtual row block, similar to where. Raises an error if arguments are given and DISTINCT ON is not supported.
DB[:items].distinct # SQL: SELECT DISTINCT * FROM items
DB[:items].order(:id).distinct(:id) # SQL: SELECT DISTINCT ON (id) * FROM items ORDER BY id
DB[:items].order(:id).distinct{func(:id)} # SQL: SELECT DISTINCT ON (func(id)) * FROM items ORDER BY id
There is support for emulating the DISTINCT ON support in MySQL, but it does not support the ORDER of the dataset, and also doesn’t work in many cases if the ONLY_FULL_GROUP_BY sql_mode is used, which is the default on MySQL 5.7.5+.
# File lib/sequel/dataset/query.rb 129 def distinct(*args, &block) 130 virtual_row_columns(args, block) 131 if args.empty? 132 cached_dataset(:_distinct_ds){clone(:distinct => EMPTY_ARRAY)} 133 else 134 raise(InvalidOperation, "DISTINCT ON not supported") unless supports_distinct_on? 135 clone(:distinct => args.freeze) 136 end 137 end
Adds an EXCEPT clause using a second dataset object. An EXCEPT compound dataset returns all rows in the current dataset that are not in the given dataset. Raises an InvalidOperation if the operation is not supported. Options:
:alias
    Use the given value as the from_self alias
:all
    Set to true to use EXCEPT ALL instead of EXCEPT, so duplicate rows can occur
:from_self
    Set to false to not wrap the returned dataset in a from_self call
DB[:items].except(DB[:other_items])
# SELECT * FROM (SELECT * FROM items EXCEPT SELECT * FROM other_items) AS t1

DB[:items].except(DB[:other_items], all: true, from_self: false)
# SELECT * FROM items EXCEPT ALL SELECT * FROM other_items

DB[:items].except(DB[:other_items], alias: :i)
# SELECT * FROM (SELECT * FROM items EXCEPT SELECT * FROM other_items) AS i
# File lib/sequel/dataset/query.rb 156 def except(dataset, opts=OPTS) 157 raise(InvalidOperation, "EXCEPT not supported") unless supports_intersect_except? 158 raise(InvalidOperation, "EXCEPT ALL not supported") if opts[:all] && !supports_intersect_except_all? 159 compound_clone(:except, dataset, opts) 160 end
Performs the inverse of Dataset#where. Note that if you have multiple filter conditions, this is not the same as a negation of all conditions.
DB[:items].exclude(category: 'software')
# SELECT * FROM items WHERE (category != 'software')

DB[:items].exclude(category: 'software', id: 3)
# SELECT * FROM items WHERE ((category != 'software') OR (id != 3))
Also note that SQL uses 3-valued boolean logic (true, false, NULL), so the inverse of a true condition is a false condition, and will still not match rows that were NULL originally. If you take the earlier example:
DB[:items].exclude(category: 'software') # SELECT * FROM items WHERE (category != 'software')
Note that this does not match rows where category is NULL. This is because NULL is an unknown value, and you do not know whether or not the NULL category is software. You can explicitly specify how to handle NULL values if you want:
DB[:items].exclude(Sequel.~(category: nil) & {category: 'software'}) # SELECT * FROM items WHERE ((category IS NULL) OR (category != 'software'))
# File lib/sequel/dataset/query.rb 186 def exclude(*cond, &block) 187 add_filter(:where, cond, true, &block) 188 end
Inverts the given conditions and adds them to the HAVING clause.
DB[:items].select_group(:name).exclude_having{count(name) < 2} # SELECT name FROM items GROUP BY name HAVING (count(name) >= 2)
See documentation for exclude for how inversion is handled in regards to SQL 3-valued boolean logic.
# File lib/sequel/dataset/query.rb 197 def exclude_having(*cond, &block) 198 add_filter(:having, cond, true, &block) 199 end
Return a clone of the dataset loaded with the given dataset extensions. If no related extension file exists or the extension does not have specific support for Dataset objects, an error will be raised.
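For example (a sketch; columns_introspection is one of the dataset extensions that ships with Sequel):

ds = DB[:items].extension(:columns_introspection)
# ds is a new frozen dataset extended with the extension's module;
# DB[:items] itself is not modified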
# File lib/sequel/dataset/query.rb 205 def extension(*exts) 206 Sequel.extension(*exts) 207 mods = exts.map{|ext| Sequel.synchronize{EXTENSION_MODULES[ext]}} 208 if mods.all? 209 with_extend(*mods) 210 else 211 with_extend(DeprecatedSingletonClassMethods).extension(*exts) 212 end 213 end
Alias for where.
# File lib/sequel/dataset/query.rb 225 def filter(*cond, &block) 226 where(*cond, &block) 227 end
Returns a cloned dataset with a :update lock style.
DB[:table].for_update # SELECT * FROM table FOR UPDATE
# File lib/sequel/dataset/query.rb 232 def for_update 233 cached_dataset(:_for_update_ds){lock_style(:update)} 234 end
Returns a copy of the dataset with the source changed. If no source is given, removes all tables. If multiple sources are given, it is the same as using a CROSS JOIN (cartesian product) between all tables. If a block is given, it is treated as a virtual row block, similar to where.
DB[:items].from # SQL: SELECT *
DB[:items].from(:blah) # SQL: SELECT * FROM blah
DB[:items].from(:blah, :foo) # SQL: SELECT * FROM blah, foo
DB[:items].from{fun(arg)} # SQL: SELECT * FROM fun(arg)
# File lib/sequel/dataset/query.rb 245 def from(*source, &block) 246 virtual_row_columns(source, block) 247 table_alias_num = 0 248 ctes = nil 249 source.map! do |s| 250 case s 251 when Dataset 252 if hoist_cte?(s) 253 ctes ||= [] 254 ctes += s.opts[:with] 255 s = s.clone(:with=>nil) 256 end 257 SQL::AliasedExpression.new(s, dataset_alias(table_alias_num+=1)) 258 when Symbol 259 sch, table, aliaz = split_symbol(s) 260 if aliaz 261 s = sch ? SQL::QualifiedIdentifier.new(sch, table) : SQL::Identifier.new(table) 262 SQL::AliasedExpression.new(s, aliaz.to_sym) 263 else 264 s 265 end 266 else 267 s 268 end 269 end 270 o = {:from=>source.empty? ? nil : source.freeze} 271 o[:with] = ((opts[:with] || EMPTY_ARRAY) + ctes).freeze if ctes 272 o[:num_dataset_sources] = table_alias_num if table_alias_num > 0 273 clone(o) 274 end
Returns a dataset selecting from the current dataset. Options:
:alias
    Controls the alias of the table
:column_aliases
    Also aliases columns, using derived column lists. Only used in conjunction with :alias.
ds = DB[:items].order(:name).select(:id, :name)
# SELECT id,name FROM items ORDER BY name

ds.from_self
# SELECT * FROM (SELECT id, name FROM items ORDER BY name) AS t1

ds.from_self(alias: :foo)
# SELECT * FROM (SELECT id, name FROM items ORDER BY name) AS foo

ds.from_self(alias: :foo, column_aliases: [:c1, :c2])
# SELECT * FROM (SELECT id, name FROM items ORDER BY name) AS foo(c1, c2)
# File lib/sequel/dataset/query.rb 293 def from_self(opts=OPTS) 294 fs = {} 295 @opts.keys.each{|k| fs[k] = nil unless non_sql_option?(k)} 296 pr = proc do 297 c = clone(fs).from(opts[:alias] ? as(opts[:alias], opts[:column_aliases]) : self) 298 if cols = _columns 299 c.send(:columns=, cols) 300 end 301 c 302 end 303 304 opts.empty? ? cached_dataset(:_from_self_ds, &pr) : pr.call 305 end
Match any of the columns to any of the patterns. The terms can be strings (which use LIKE) or regular expressions if the database supports that. Note that the total number of pattern matches will be Array(columns).length * Array(terms).length, which could cause performance issues.
Options (all are boolean):
:all_columns
    All columns must be matched to any of the given patterns.
:all_patterns
    All patterns must match at least one of the columns.
:case_insensitive
    Use a case insensitive pattern match (the default is case sensitive if the database supports it).
If both :all_columns and :all_patterns are true, all columns must match all patterns.
Examples:
dataset.grep(:a, '%test%')
# SELECT * FROM items WHERE (a LIKE '%test%' ESCAPE '\')

dataset.grep([:a, :b], %w'%test% foo')
# SELECT * FROM items WHERE ((a LIKE '%test%' ESCAPE '\') OR (a LIKE 'foo' ESCAPE '\')
#   OR (b LIKE '%test%' ESCAPE '\') OR (b LIKE 'foo' ESCAPE '\'))

dataset.grep([:a, :b], %w'%foo% %bar%', all_patterns: true)
# SELECT * FROM a WHERE (((a LIKE '%foo%' ESCAPE '\') OR (b LIKE '%foo%' ESCAPE '\'))
#   AND ((a LIKE '%bar%' ESCAPE '\') OR (b LIKE '%bar%' ESCAPE '\')))

dataset.grep([:a, :b], %w'%foo% %bar%', all_columns: true)
# SELECT * FROM a WHERE (((a LIKE '%foo%' ESCAPE '\') OR (a LIKE '%bar%' ESCAPE '\'))
#   AND ((b LIKE '%foo%' ESCAPE '\') OR (b LIKE '%bar%' ESCAPE '\')))

dataset.grep([:a, :b], %w'%foo% %bar%', all_patterns: true, all_columns: true)
# SELECT * FROM a WHERE ((a LIKE '%foo%' ESCAPE '\') AND (b LIKE '%foo%' ESCAPE '\')
#   AND (a LIKE '%bar%' ESCAPE '\') AND (b LIKE '%bar%' ESCAPE '\'))
# File lib/sequel/dataset/query.rb 342 def grep(columns, patterns, opts=OPTS) 343 column_op = opts[:all_columns] ? :AND : :OR 344 if opts[:all_patterns] 345 conds = Array(patterns).map do |pat| 346 SQL::BooleanExpression.new(column_op, *Array(columns).map{|c| SQL::StringExpression.like(c, pat, opts)}) 347 end 348 where(SQL::BooleanExpression.new(:AND, *conds)) 349 else 350 conds = Array(columns).map do |c| 351 SQL::BooleanExpression.new(:OR, *Array(patterns).map{|pat| SQL::StringExpression.like(c, pat, opts)}) 352 end 353 where(SQL::BooleanExpression.new(column_op, *conds)) 354 end 355 end
Returns a copy of the dataset with the results grouped by the value of the given columns. If a block is given, it is treated as a virtual row block, similar to where.
DB[:items].group(:id) # SELECT * FROM items GROUP BY id
DB[:items].group(:id, :name) # SELECT * FROM items GROUP BY id, name
DB[:items].group{[a, sum(b)]} # SELECT * FROM items GROUP BY a, sum(b)
# File lib/sequel/dataset/query.rb 364 def group(*columns, &block) 365 virtual_row_columns(columns, block) 366 clone(:group => (columns.compact.empty? ? nil : columns.freeze)) 367 end
Returns a dataset grouped by the given column with count by group. Column aliases may be supplied, and will be included in the select clause. If a block is given, it is treated as a virtual row block, similar to where.
Examples:
DB[:items].group_and_count(:name).all
# SELECT name, count(*) AS count FROM items GROUP BY name
# => [{:name=>'a', :count=>1}, ...]

DB[:items].group_and_count(:first_name, :last_name).all
# SELECT first_name, last_name, count(*) AS count FROM items GROUP BY first_name, last_name
# => [{:first_name=>'a', :last_name=>'b', :count=>1}, ...]

DB[:items].group_and_count(Sequel[:first_name].as(:name)).all
# SELECT first_name AS name, count(*) AS count FROM items GROUP BY first_name
# => [{:name=>'a', :count=>1}, ...]

DB[:items].group_and_count{substr(:first_name, 1, 1).as(:initial)}.all
# SELECT substr(first_name, 1, 1) AS initial, count(*) AS count FROM items GROUP BY substr(first_name, 1, 1)
# => [{:initial=>'a', :count=>1}, ...]
# File lib/sequel/dataset/query.rb 395 def group_and_count(*columns, &block) 396 select_group(*columns, &block).select_append(COUNT_OF_ALL_AS_COUNT) 397 end
Returns a copy of the dataset with the given columns added to the list of existing columns to group on. If no existing columns are present this method simply sets the columns as the initial ones to group on.
DB[:items].group_append(:b) # SELECT * FROM items GROUP BY b DB[:items].group(:a).group_append(:b) # SELECT * FROM items GROUP BY a, b
# File lib/sequel/dataset/query.rb 405 def group_append(*columns, &block) 406 columns = @opts[:group] + columns if @opts[:group] 407 group(*columns, &block) 408 end
Alias of group
# File lib/sequel/dataset/query.rb 370 def group_by(*columns, &block) 371 group(*columns, &block) 372 end
Adds the appropriate CUBE syntax to GROUP BY.
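A usage sketch (PostgreSQL-style syntax shown; the exact SQL depends on the database):

DB[:items].group(:type, :region).group_cube
# SELECT * FROM items GROUP BY CUBE(type, region)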
# File lib/sequel/dataset/query.rb 411 def group_cube 412 raise Error, "GROUP BY CUBE not supported on #{db.database_type}" unless supports_group_cube? 413 clone(:group_options=>:cube) 414 end
Adds the appropriate ROLLUP syntax to GROUP BY.
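A usage sketch (PostgreSQL-style syntax shown; the exact SQL depends on the database):

DB[:items].group(:year, :month).group_rollup
# SELECT * FROM items GROUP BY ROLLUP(year, month)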
# File lib/sequel/dataset/query.rb 417 def group_rollup 418 raise Error, "GROUP BY ROLLUP not supported on #{db.database_type}" unless supports_group_rollup? 419 clone(:group_options=>:rollup) 420 end
Adds the appropriate GROUPING SETS syntax to GROUP BY.
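A usage sketch (PostgreSQL-style syntax shown): each argument previously passed to group becomes one grouping set:

DB[:items].group([:a, :b], :a, []).grouping_sets
# SELECT * FROM items GROUP BY GROUPING SETS((a, b), (a), ())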
# File lib/sequel/dataset/query.rb 423 def grouping_sets 424 raise Error, "GROUP BY GROUPING SETS not supported on #{db.database_type}" unless supports_grouping_sets? 425 clone(:group_options=>:"grouping sets") 426 end
Returns a copy of the dataset with the HAVING conditions changed. See where for argument types.
DB[:items].group(:sum).having(sum: 10) # SELECT * FROM items GROUP BY sum HAVING (sum = 10)
# File lib/sequel/dataset/query.rb 432 def having(*cond, &block) 433 add_filter(:having, cond, &block) 434 end
Adds an INTERSECT clause using a second dataset object. An INTERSECT compound dataset returns all rows in both the current dataset and the given dataset. Raises an InvalidOperation if the operation is not supported. Options:
:alias
    Use the given value as the from_self alias
:all
    Set to true to use INTERSECT ALL instead of INTERSECT, so duplicate rows can occur
:from_self
    Set to false to not wrap the returned dataset in a from_self call
DB[:items].intersect(DB[:other_items])
# SELECT * FROM (SELECT * FROM items INTERSECT SELECT * FROM other_items) AS t1

DB[:items].intersect(DB[:other_items], all: true, from_self: false)
# SELECT * FROM items INTERSECT ALL SELECT * FROM other_items

DB[:items].intersect(DB[:other_items], alias: :i)
# SELECT * FROM (SELECT * FROM items INTERSECT SELECT * FROM other_items) AS i
# File lib/sequel/dataset/query.rb 453 def intersect(dataset, opts=OPTS) 454 raise(InvalidOperation, "INTERSECT not supported") unless supports_intersect_except? 455 raise(InvalidOperation, "INTERSECT ALL not supported") if opts[:all] && !supports_intersect_except_all? 456 compound_clone(:intersect, dataset, opts) 457 end
Inverts the current WHERE and HAVING clauses. If there is neither a WHERE nor a HAVING clause, adds a WHERE clause that is always false.
DB[:items].where(category: 'software').invert # SELECT * FROM items WHERE (category != 'software') DB[:items].where(category: 'software', id: 3).invert # SELECT * FROM items WHERE ((category != 'software') OR (id != 3))
See documentation for exclude for how inversion is handled in regards to SQL 3-valued boolean logic.
# File lib/sequel/dataset/query.rb 470 def invert 471 cached_dataset(:_invert_ds) do 472 having, where = @opts.values_at(:having, :where) 473 if having.nil? && where.nil? 474 where(false) 475 else 476 o = {} 477 o[:having] = SQL::BooleanExpression.invert(having) if having 478 o[:where] = SQL::BooleanExpression.invert(where) if where 479 clone(o) 480 end 481 end 482 end
Alias of inner_join
# File lib/sequel/dataset/query.rb 485 def join(*args, &block) 486 inner_join(*args, &block) 487 end
Returns a joined dataset. Not usually called directly; users should use the appropriate join method (e.g. join, left_join, natural_join, cross_join), which fills in the type argument.
Takes the following arguments:
type
    The type of join to do (e.g. :inner)
table
    The table to join into the current dataset. Generally one of the following types: …
expr
    The conditions used when joining, depends on type: …
options
    A hash of options, with the following keys supported: …
block
    The block argument should only be given if a JOIN with an ON clause is used, in which case it yields the table alias/name for the table currently being joined, the table alias/name for the last joined (or first table), and an array of previous …
Examples:
DB[:a].join_table(:cross, :b)
# SELECT * FROM a CROSS JOIN b

DB[:a].join_table(:inner, DB[:b], c: :d)
# SELECT * FROM a INNER JOIN (SELECT * FROM b) AS t1 ON (t1.c = a.d)

DB[:a].join_table(:left, Sequel[:b].as(:c), [:d])
# SELECT * FROM a LEFT JOIN b AS c USING (d)

DB[:a].natural_join(:b).join_table(:inner, :c) do |ta, jta, js|
  (Sequel.qualify(ta, :d) > Sequel.qualify(jta, :e)) & {Sequel.qualify(ta, :f)=>DB.from(js.first.table).select(:g)}
end
# SELECT * FROM a NATURAL JOIN b INNER JOIN c
#   ON ((c.d > b.e) AND (c.f IN (SELECT g FROM b)))
# File lib/sequel/dataset/query.rb 549 def join_table(type, table, expr=nil, options=OPTS, &block) 550 if hoist_cte?(table) 551 s, ds = hoist_cte(table) 552 return s.join_table(type, ds, expr, options, &block) 553 end 554 555 using_join = options[:join_using] || (expr.is_a?(Array) && !expr.empty? && expr.all?{|x| x.is_a?(Symbol)}) 556 if using_join && !supports_join_using? 557 h = {} 558 expr.each{|e| h[e] = e} 559 return join_table(type, table, h, options) 560 end 561 562 table_alias = options[:table_alias] 563 564 if table.is_a?(SQL::AliasedExpression) 565 table_expr = if table_alias 566 SQL::AliasedExpression.new(table.expression, table_alias, table.columns) 567 else 568 table 569 end 570 table = table_expr.expression 571 table_name = table_alias = table_expr.alias 572 elsif table.is_a?(Dataset) 573 if table_alias.nil? 574 table_alias_num = (@opts[:num_dataset_sources] || 0) + 1 575 table_alias = dataset_alias(table_alias_num) 576 end 577 table_name = table_alias 578 table_expr = SQL::AliasedExpression.new(table, table_alias) 579 else 580 table, implicit_table_alias = split_alias(table) 581 table_alias ||= implicit_table_alias 582 table_name = table_alias || table 583 table_expr = table_alias ? SQL::AliasedExpression.new(table, table_alias) : table 584 end 585 586 join = if expr.nil? and !block 587 SQL::JoinClause.new(type, table_expr) 588 elsif using_join 589 raise(Sequel::Error, "can't use a block if providing an array of symbols as expr") if block 590 SQL::JoinUsingClause.new(expr, type, table_expr) 591 else 592 last_alias = options[:implicit_qualifier] || @opts[:last_joined_table] || first_source_alias 593 qualify_type = options[:qualify] 594 if Sequel.condition_specifier?(expr) 595 expr = expr.map do |k, v| 596 qualify_type = default_join_table_qualification if qualify_type.nil? 597 case qualify_type 598 when false 599 nil # Do no qualification 600 when :deep 601 k = Sequel::Qualifier.new(table_name).transform(k) 602 v = Sequel::Qualifier.new(last_alias).transform(v) 603 else 604 k = qualified_column_name(k, table_name) if k.is_a?(Symbol) 605 v = qualified_column_name(v, last_alias) if v.is_a?(Symbol) 606 end 607 [k,v] 608 end 609 expr = SQL::BooleanExpression.from_value_pairs(expr) 610 end 611 if block 612 expr2 = yield(table_name, last_alias, @opts[:join] || EMPTY_ARRAY) 613 expr = expr ? SQL::BooleanExpression.new(:AND, expr, expr2) : expr2 614 end 615 SQL::JoinOnClause.new(expr, type, table_expr) 616 end 617 618 opts = {:join => ((@opts[:join] || EMPTY_ARRAY) + [join]).freeze} 619 opts[:last_joined_table] = table_name unless options[:reset_implicit_qualifier] == false 620 opts[:num_dataset_sources] = table_alias_num if table_alias_num 621 clone(opts) 622 end
Marks this dataset as a lateral dataset. If used in another dataset’s FROM or JOIN clauses, it will surround the subquery with LATERAL to enable it to deal with previous tables in the query:
DB.from(:a, DB[:b].where(Sequel[:a][:c]=>Sequel[:b][:d]).lateral) # SELECT * FROM a, LATERAL (SELECT * FROM b WHERE (a.c = b.d))
# File lib/sequel/dataset/query.rb 643 def lateral 644 cached_dataset(:_lateral_ds){clone(:lateral=>true)} 645 end
If given an integer, the dataset will contain only the first l results. If given a range, it will contain only those at offsets within that range. If a second argument is given, it is used as an offset. To use an offset without a limit, pass nil as the first argument.
DB[:items].limit(10) # SELECT * FROM items LIMIT 10
DB[:items].limit(10, 20) # SELECT * FROM items LIMIT 10 OFFSET 20
DB[:items].limit(10...20) # SELECT * FROM items LIMIT 10 OFFSET 10
DB[:items].limit(10..20) # SELECT * FROM items LIMIT 11 OFFSET 10
DB[:items].limit(nil, 20) # SELECT * FROM items OFFSET 20
# File lib/sequel/dataset/query.rb 657 def limit(l, o = (no_offset = true; nil)) 658 return from_self.limit(l, o) if @opts[:sql] 659 660 if l.is_a?(Range) 661 no_offset = false 662 o = l.first 663 l = l.last - l.first + (l.exclude_end? ? 0 : 1) 664 end 665 l = l.to_i if l.is_a?(String) && !l.is_a?(LiteralString) 666 if l.is_a?(Integer) 667 raise(Error, 'Limits must be greater than or equal to 1') unless l >= 1 668 end 669 670 ds = clone(:limit=>l) 671 ds = ds.offset(o) unless no_offset 672 ds 673 end
Returns a cloned dataset with the given lock style. If style is a string, it will be used directly. You should never pass a string to this method that is derived from user input, as that can lead to SQL injection.

A symbol may be used for database independent locking behavior, but all supported symbols have separate methods (e.g. for_update).
DB[:items].lock_style('FOR SHARE NOWAIT') # SELECT * FROM items FOR SHARE NOWAIT DB[:items].lock_style('FOR UPDATE OF table1 SKIP LOCKED') # SELECT * FROM items FOR UPDATE OF table1 SKIP LOCKED
# File lib/sequel/dataset/query.rb 687 def lock_style(style) 688 clone(:lock => style) 689 end
Return a dataset with a WHEN MATCHED THEN DELETE clause added to the MERGE statement. If a block is passed, treat it as a virtual row and use it as additional conditions for the match.
merge_delete # WHEN MATCHED THEN DELETE merge_delete{a > 30} # WHEN MATCHED AND (a > 30) THEN DELETE
# File lib/sequel/dataset/query.rb 700 def merge_delete(&block) 701 _merge_when(:type=>:delete, &block) 702 end
Return a dataset with a WHEN NOT MATCHED THEN INSERT clause added to the MERGE statement. If a block is passed, treat it as a virtual row and use it as additional conditions for the match.
The arguments provided can be any arguments that would be accepted by insert.
merge_insert(i1: :i2, a: Sequel[:b]+11) # WHEN NOT MATCHED THEN INSERT (i1, a) VALUES (i2, (b + 11)) merge_insert(:i2, Sequel[:b]+11){a > 30} # WHEN NOT MATCHED AND (a > 30) THEN INSERT VALUES (i2, (b + 11))
# File lib/sequel/dataset/query.rb 716 def merge_insert(*values, &block) 717 _merge_when(:type=>:insert, :values=>values, &block) 718 end
Return a dataset with a WHEN MATCHED THEN UPDATE clause added to the MERGE statement. If a block is passed, treat it as a virtual row and use it as additional conditions for the match.
merge_update(i1: Sequel[:i1]+:i2+10, a: Sequel[:a]+:b+20) # WHEN MATCHED THEN UPDATE SET i1 = (i1 + i2 + 10), a = (a + b + 20) merge_update(i1: :i2){a > 30} # WHEN MATCHED AND (a > 30) THEN UPDATE SET i1 = i2
# File lib/sequel/dataset/query.rb 729 def merge_update(values, &block) 730 _merge_when(:type=>:update, :values=>values, &block) 731 end
Return a dataset with the source and join condition to use for the MERGE statement.
merge_using(:m2, i1: :i2) # USING m2 ON (i1 = i2)
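Putting the pieces together (a sketch; requires a database with MERGE support, such as PostgreSQL 15+), the merge action method executes the assembled statement:

DB[:m1].
  merge_using(:m2, i1: :i2).       # USING m2 ON (i1 = i2)
  merge_insert(i1: :i2).           # WHEN NOT MATCHED THEN INSERT ...
  merge_update(a: Sequel[:a] + 1). # WHEN MATCHED THEN UPDATE SET ...
  merge                            # runs the MERGE statement on the database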
# File lib/sequel/dataset/query.rb 737 def merge_using(source, join_condition) 738 clone(:merge_using => [source, join_condition].freeze) 739 end
Returns a cloned dataset without a row_proc.
ds = DB[:items].with_row_proc(:invert.to_proc)
ds.all # => [{2=>:id}]
ds.naked.all # => [{:id=>2}]
# File lib/sequel/dataset/query.rb 746 def naked 747 cached_dataset(:_naked_ds){with_row_proc(nil)} 748 end
Returns a copy of the dataset that will raise a DatabaseLockTimeout instead of waiting for rows that are locked by another transaction
DB[:items].for_update.nowait # SELECT * FROM items FOR UPDATE NOWAIT
# File lib/sequel/dataset/query.rb 755 def nowait 756 cached_dataset(:_nowait_ds) do 757 raise(Error, 'This dataset does not support raises errors instead of waiting for locked rows') unless supports_nowait? 758 clone(:nowait=>true) 759 end 760 end
Returns a copy of the dataset with the given offset. Can be safely combined with limit. If you call limit with an offset, it will override the offset if you’ve called offset first.
DB[:items].offset(10) # SELECT * FROM items OFFSET 10
# File lib/sequel/dataset/query.rb 767 def offset(o) 768 o = o.to_i if o.is_a?(String) && !o.is_a?(LiteralString) 769 if o.is_a?(Integer) 770 raise(Error, 'Offsets must be greater than or equal to 0') unless o >= 0 771 end 772 clone(:offset => o) 773 end
Adds an alternate filter to an existing WHERE clause using OR. If there is no WHERE clause, then the default is WHERE true, and OR would be redundant, so return the dataset in that case.
DB[:items].where(:a).or(:b) # SELECT * FROM items WHERE a OR b DB[:items].or(:b) # SELECT * FROM items
# File lib/sequel/dataset/query.rb 781 def or(*cond, &block) 782 if @opts[:where].nil? 783 self 784 else 785 add_filter(:where, cond, false, :OR, &block) 786 end 787 end
Returns a copy of the dataset with the order changed. If the dataset has an existing order, it is ignored and overwritten with this order. If a nil is given the returned dataset has no order. This can accept multiple arguments of varying kinds, such as SQL functions. If a block is given, it is treated as a virtual row block, similar to where.
DB[:items].order(:name) # SELECT * FROM items ORDER BY name
DB[:items].order(:a, :b) # SELECT * FROM items ORDER BY a, b
DB[:items].order(Sequel.lit('a + b')) # SELECT * FROM items ORDER BY a + b
DB[:items].order(Sequel[:a] + :b) # SELECT * FROM items ORDER BY (a + b)
DB[:items].order(Sequel.desc(:name)) # SELECT * FROM items ORDER BY name DESC
DB[:items].order(Sequel.asc(:name, nulls: :last)) # SELECT * FROM items ORDER BY name ASC NULLS LAST
DB[:items].order{sum(name).desc} # SELECT * FROM items ORDER BY sum(name) DESC
DB[:items].order(nil) # SELECT * FROM items
# File lib/sequel/dataset/query.rb 803 def order(*columns, &block) 804 virtual_row_columns(columns, block) 805 clone(:order => (columns.compact.empty?) ? nil : columns.freeze) 806 end
Returns a copy of the dataset with the order columns added to the end of the existing order.
DB[:items].order(:a).order(:b) # SELECT * FROM items ORDER BY b DB[:items].order(:a).order_append(:b) # SELECT * FROM items ORDER BY a, b
# File lib/sequel/dataset/query.rb 813 def order_append(*columns, &block) 814 columns = @opts[:order] + columns if @opts[:order] 815 order(*columns, &block) 816 end
Alias of order
# File lib/sequel/dataset/query.rb 819 def order_by(*columns, &block) 820 order(*columns, &block) 821 end
Alias of order_append.
# File lib/sequel/dataset/query.rb 824 def order_more(*columns, &block) 825 order_append(*columns, &block) 826 end
Returns a copy of the dataset with the order columns added to the beginning of the existing order.
DB[:items].order(:a).order(:b) # SELECT * FROM items ORDER BY b DB[:items].order(:a).order_prepend(:b) # SELECT * FROM items ORDER BY b, a
# File lib/sequel/dataset/query.rb 833 def order_prepend(*columns, &block) 834 ds = order(*columns, &block) 835 @opts[:order] ? ds.order_append(*@opts[:order]) : ds 836 end
Qualify to the given table, or first source if no table is given.
DB[:items].where(id: 1).qualify
# SELECT items.* FROM items WHERE (items.id = 1)

DB[:items].where(id: 1).qualify(:i)
# SELECT i.* FROM items WHERE (i.id = 1)
# File lib/sequel/dataset/query.rb 845 def qualify(table=(cache=true; first_source)) 846 o = @opts 847 return self if o[:sql] 848 849 pr = proc do 850 h = {} 851 (o.keys & QUALIFY_KEYS).each do |k| 852 h[k] = qualified_expression(o[k], table) 853 end 854 h[:select] = [SQL::ColumnAll.new(table)].freeze if !o[:select] || o[:select].empty? 855 clone(h) 856 end 857 858 cache ? cached_dataset(:_qualify_ds, &pr) : pr.call 859 end
Modify the RETURNING clause, only supported on a few databases. If returning is used, instead of insert returning the autogenerated primary key or update/delete returning the number of modified rows, results are returned using fetch_rows.
DB[:items].returning # RETURNING *
DB[:items].returning(nil) # RETURNING NULL
DB[:items].returning(:id, :name) # RETURNING id, name

DB[:items].returning.insert(a: 1) do |hash|
  # hash for each row inserted, with values for all columns
end
DB[:items].returning.update(a: 1) do |hash|
  # hash for each row updated, with values for all columns
end
DB[:items].returning.delete do |hash|
  # hash for each row deleted, with values for all columns
end
# File lib/sequel/dataset/query.rb 879 def returning(*values) 880 if values.empty? 881 cached_dataset(:_returning_ds) do 882 raise Error, "RETURNING is not supported on #{db.database_type}" unless supports_returning?(:insert) 883 clone(:returning=>EMPTY_ARRAY) 884 end 885 else 886 raise Error, "RETURNING is not supported on #{db.database_type}" unless supports_returning?(:insert) 887 clone(:returning=>values.freeze) 888 end 889 end
Returns a copy of the dataset with the order reversed. If no order is given, the existing order is inverted.
DB[:items].reverse(:id) # SELECT * FROM items ORDER BY id DESC
DB[:items].reverse{foo(bar)} # SELECT * FROM items ORDER BY foo(bar) DESC
DB[:items].order(:id).reverse # SELECT * FROM items ORDER BY id DESC
DB[:items].order(:id).reverse(Sequel.desc(:name)) # SELECT * FROM items ORDER BY name ASC
# File lib/sequel/dataset/query.rb 898 def reverse(*order, &block) 899 if order.empty? && !block 900 cached_dataset(:_reverse_ds){order(*invert_order(@opts[:order]))} 901 else 902 virtual_row_columns(order, block) 903 order(*invert_order(order.empty? ? @opts[:order] : order.freeze)) 904 end 905 end
Alias of reverse
# File lib/sequel/dataset/query.rb 908 def reverse_order(*order, &block) 909 reverse(*order, &block) 910 end
Returns a copy of the dataset with the columns selected changed to the given columns. This also takes a virtual row block, similar to where.
DB[:items].select(:a) # SELECT a FROM items
DB[:items].select(:a, :b) # SELECT a, b FROM items
DB[:items].select{[a, sum(b)]} # SELECT a, sum(b) FROM items
# File lib/sequel/dataset/query.rb 919 def select(*columns, &block) 920 virtual_row_columns(columns, block) 921 clone(:select => columns.freeze) 922 end
Returns a copy of the dataset selecting the wildcard if no arguments are given. If arguments are given, treat them as tables and select all columns (using the wildcard) from each table.
DB[:items].select(:a).select_all # SELECT * FROM items
DB[:items].select_all(:items) # SELECT items.* FROM items
DB[:items].select_all(:items, :foo) # SELECT items.*, foo.* FROM items
# File lib/sequel/dataset/query.rb 931 def select_all(*tables) 932 if tables.empty? 933 cached_dataset(:_select_all_ds){clone(:select => nil)} 934 else 935 select(*tables.map{|t| i, a = split_alias(t); a || i}.map!{|t| SQL::ColumnAll.new(t)}.freeze) 936 end 937 end
Returns a copy of the dataset with the given columns added to the existing selected columns. If no columns are currently selected, it will select the columns given in addition to *.
DB[:items].select(:a).select(:b) # SELECT b FROM items
DB[:items].select(:a).select_append(:b) # SELECT a, b FROM items
DB[:items].select_append(:b) # SELECT *, b FROM items
# File lib/sequel/dataset/query.rb 946 def select_append(*columns, &block) 947 cur_sel = @opts[:select] 948 if !cur_sel || cur_sel.empty? 949 unless supports_select_all_and_column? 950 return select_all(*(Array(@opts[:from]) + Array(@opts[:join]))).select_append(*columns, &block) 951 end 952 cur_sel = [WILDCARD] 953 end 954 select(*(cur_sel + columns), &block) 955 end
Set both the select and group clauses with the given columns. Column aliases may be supplied, and will be included in the select clause. This also takes a virtual row block similar to where.
DB[:items].select_group(:a, :b)
# SELECT a, b FROM items GROUP BY a, b

DB[:items].select_group(Sequel[:c].as(:a)){f(c2)}
# SELECT c AS a, f(c2) FROM items GROUP BY c, f(c2)
# File lib/sequel/dataset/query.rb 966 def select_group(*columns, &block) 967 virtual_row_columns(columns, block) 968 select(*columns).group(*columns.map{|c| unaliased_identifier(c)}) 969 end
Alias for select_append.
# File lib/sequel/dataset/query.rb 972 def select_more(*columns, &block) 973 select_append(*columns, &block) 974 end
Set the server for this dataset to use. Used to pick a specific database shard to run a query against, or to override the default (where SELECT uses :read_only database and all other queries use the :default database). This method is always available but is only useful when database sharding is being used.
DB[:items].all # Uses the :read_only or :default server
DB[:items].delete # Uses the :default server
DB[:items].server(:blah).delete # Uses the :blah server
# File lib/sequel/dataset/query.rb 985 def server(servr) 986 clone(:server=>servr) 987 end
If the database uses sharding and the current dataset has not had a server set, return a cloned dataset that uses the given server. Otherwise, return the receiver directly instead of returning a clone.
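A usage sketch (assuming a sharded Database with a :read_only server configured):

ds = DB[:items].server?(:read_only)
# equivalent to DB[:items].server(:read_only) if DB is sharded and no server was set,
# otherwise returns DB[:items] unchanged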
# File lib/sequel/dataset/query.rb 992 def server?(server) 993 if db.sharded? && !opts[:server] 994 server(server) 995 else 996 self 997 end 998 end
Specify that the check for limits/offsets when updating/deleting be skipped for the dataset.
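A usage sketch: without this, calling delete or update on a dataset with a limit or offset raises an error on databases that cannot apply the limit to the modification:

DB[:items].limit(10).skip_limit_check.delete
# deletes without raising, even though the dataset has a limit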
# File lib/sequel/dataset/query.rb 1001 def skip_limit_check 1002 cached_dataset(:_skip_limit_check_ds) do 1003 clone(:skip_limit_check=>true) 1004 end 1005 end
Skip locked rows when returning results from this dataset.
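A usage sketch (PostgreSQL-style SQL shown; raises if the database does not support skipping locked rows):

DB[:items].for_update.skip_locked
# SELECT * FROM items FOR UPDATE SKIP LOCKED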
# File lib/sequel/dataset/query.rb 1008 def skip_locked 1009 cached_dataset(:_skip_locked_ds) do 1010 raise(Error, 'This dataset does not support skipping locked rows') unless supports_skip_locked? 1011 clone(:skip_locked=>true) 1012 end 1013 end
Returns a copy of the dataset with no filters (HAVING or WHERE clause) applied.
DB[:items].group(:a).having(a: 1).where(:b).unfiltered # SELECT * FROM items GROUP BY a
# File lib/sequel/dataset/query.rb 1019 def unfiltered 1020 cached_dataset(:_unfiltered_ds){clone(:where => nil, :having => nil)} 1021 end
Returns a copy of the dataset with no grouping (GROUP or HAVING clause) applied.
DB[:items].group(:a).having(a: 1).where(:b).ungrouped # SELECT * FROM items WHERE b
# File lib/sequel/dataset/query.rb 1027 def ungrouped 1028 cached_dataset(:_ungrouped_ds){clone(:group => nil, :having => nil)} 1029 end
Adds a UNION clause using a second dataset object. A UNION compound dataset returns all rows in either the current dataset or the given dataset. Options:
:alias
    Use the given value as the from_self alias
:all
    Set to true to use UNION ALL instead of UNION, so duplicate rows can occur
:from_self
    Set to false to not wrap the returned dataset in a from_self call
DB[:items].union(DB[:other_items])
# SELECT * FROM (SELECT * FROM items UNION SELECT * FROM other_items) AS t1

DB[:items].union(DB[:other_items], all: true, from_self: false)
# SELECT * FROM items UNION ALL SELECT * FROM other_items

DB[:items].union(DB[:other_items], alias: :i)
# SELECT * FROM (SELECT * FROM items UNION SELECT * FROM other_items) AS i
# File lib/sequel/dataset/query.rb 1047 def union(dataset, opts=OPTS) 1048 compound_clone(:union, dataset, opts) 1049 end
Returns a copy of the dataset with no limit or offset.
DB[:items].limit(10, 20).unlimited # SELECT * FROM items
# File lib/sequel/dataset/query.rb 1054 def unlimited 1055 cached_dataset(:_unlimited_ds){clone(:limit=>nil, :offset=>nil)} 1056 end
Returns a copy of the dataset with no order.
DB[:items].order(:a).unordered # SELECT * FROM items
# File lib/sequel/dataset/query.rb 1061 def unordered 1062 cached_dataset(:_unordered_ds){clone(:order=>nil)} 1063 end
Returns a copy of the dataset with the given WHERE conditions imposed upon it.
Accepts the following argument types:
Hash, Array of pairs
    list of equality/inclusion expressions
Symbol
    taken as a boolean column argument (e.g. WHERE active)
Sequel::SQL::BooleanExpression, Sequel::LiteralString
    an existing condition expression, probably created using the Sequel expression filter DSL
where also accepts a block, which should return one of the above argument types, and is treated the same way. This block yields a virtual row object, which is easy to use to create identifiers and functions. For more details on the virtual row support, see the “Virtual Rows” guide
If both a block and regular argument are provided, they get ANDed together.
Examples:
DB[:items].where(id: 3)
# SELECT * FROM items WHERE (id = 3)

DB[:items].where(Sequel.lit('price < ?', 100))
# SELECT * FROM items WHERE price < 100

DB[:items].where([[:id, [1,2,3]], [:id, 0..10]])
# SELECT * FROM items WHERE ((id IN (1, 2, 3)) AND ((id >= 0) AND (id <= 10)))

DB[:items].where(Sequel.lit('price < 100'))
# SELECT * FROM items WHERE price < 100

DB[:items].where(:active)
# SELECT * FROM items WHERE active

DB[:items].where{price < 100}
# SELECT * FROM items WHERE (price < 100)
Multiple where calls can be chained for scoping:
software = dataset.where(category: 'software').where{price < 100} # SELECT * FROM items WHERE ((category = 'software') AND (price < 100))
See the “Dataset Filtering” guide for more examples and details.
# File lib/sequel/dataset/query.rb 1107 def where(*cond, &block) 1108 add_filter(:where, cond, &block) 1109 end
Return a clone of the dataset with an additional named window that can be referenced in window functions. See Sequel::SQL::Window for a list of options that can be passed in. Example:
DB[:items].window(:w, partition: :c1, order: :c2) # SELECT * FROM items WINDOW w AS (PARTITION BY c1 ORDER BY c2)
# File lib/sequel/dataset/query.rb 1117 def window(name, opts) 1118 clone(:window=>((@opts[:window]||EMPTY_ARRAY) + [[name, SQL::Window.new(opts)].freeze]).freeze) 1119 end
Add a common table expression (CTE) with the given name and a dataset that defines the CTE. A common table expression acts as an inline view for the query.
Options:
:args
    Specify the arguments/columns for the CTE, should be an array of symbols.
:recursive
    Specify that this is a recursive CTE
:materialized
    Set to false to force inlining of the CTE, or true to force not inlining the CTE (PostgreSQL 12+/SQLite 3.35+).
DB[:items].with(:items, DB[:syx].where(Sequel[:name].like('A%'))) # WITH items AS (SELECT * FROM syx WHERE (name LIKE 'A%' ESCAPE '\')) SELECT * FROM items
# File lib/sequel/dataset/query.rb 1132 def with(name, dataset, opts=OPTS) 1133 raise(Error, 'This dataset does not support common table expressions') unless supports_cte? 1134 if hoist_cte?(dataset) 1135 s, ds = hoist_cte(dataset) 1136 s.with(name, ds, opts) 1137 else 1138 clone(:with=>((@opts[:with]||EMPTY_ARRAY) + [Hash[opts].merge!(:name=>name, :dataset=>dataset)]).freeze) 1139 end 1140 end
Create a subclass of the receiver’s class, and include the given modules into it. If a block is provided, a DatasetModule is created using the block and is included into the subclass. Create an instance of the subclass using the same db and opts, so that the returned dataset operates similarly to a clone extended with the given modules. This approach is used to avoid singleton classes, which significantly improves performance.
Note that like Object#extend, when multiple modules are provided as arguments the subclass includes the modules in reverse order.
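A usage sketch (the active method below is illustrative, not part of Sequel):

ds = DB[:items].with_extend do
  def active
    where(active: true)
  end
end
ds.active.sql # SELECT with the extra WHERE condition applied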
# File lib/sequel/dataset/query.rb 1221 def with_extend(*mods, &block) 1222 c = Class.new(self.class) 1223 c.include(*mods) unless mods.empty? 1224 c.include(DatasetModule.new(&block)) if block 1225 o = c.freeze.allocate 1226 o.instance_variable_set(:@db, @db) 1227 o.instance_variable_set(:@opts, @opts) 1228 o.instance_variable_set(:@cache, {}) 1229 if cols = cache_get(:_columns) 1230 o.send(:columns=, cols) 1231 end 1232 o.freeze 1233 end
Add a recursive common table expression (CTE) with the given name, a dataset that defines the nonrecursive part of the CTE, and a dataset that defines the recursive part of the CTE.
Options:
:args
    Specify the arguments/columns for the CTE, should be an array of symbols.
:union_all
    Set to false to use UNION instead of UNION ALL combining the nonrecursive and recursive parts.

PostgreSQL 14+ Options:

:cycle
    Stop recursive searching when a cycle is detected. Includes two columns in the result of the CTE, a cycle column indicating whether a cycle was detected for the current row, and a path column for the path traversed to get to the current row. If given, must be a hash with the following keys: …
:search
    Include an order column in the result of the CTE that allows for breadth or depth first searching. If given, must be a hash with the following keys: …
DB[:t].with_recursive(:t,
  DB[:i1].select(:id, :parent_id).where(parent_id: nil),
  DB[:i1].join(:t, id: :parent_id).select(Sequel[:i1][:id], Sequel[:i1][:parent_id]),
  args: [:id, :parent_id])

# WITH RECURSIVE t(id, parent_id) AS (
#   SELECT id, parent_id FROM i1 WHERE (parent_id IS NULL)
#   UNION ALL
#   SELECT i1.id, i1.parent_id FROM i1 INNER JOIN t ON (t.id = i1.parent_id)
# ) SELECT * FROM t

DB[:t].with_recursive(:t,
  DB[:i1].where(parent_id: nil),
  DB[:i1].join(:t, id: :parent_id).select_all(:i1),
  search: {by: :id, type: :breadth},
  cycle: {columns: :id, cycle_value: 1, noncycle_value: 2})

# WITH RECURSIVE t AS (
#     SELECT * FROM i1 WHERE (parent_id IS NULL)
#     UNION ALL
#     (SELECT i1.* FROM i1 INNER JOIN t ON (t.id = i1.parent_id))
#   )
#   SEARCH BREADTH FIRST BY id SET ordercol
#   CYCLE id SET is_cycle TO 1 DEFAULT 2 USING path
# SELECT * FROM t
# File lib/sequel/dataset/query.rb 1198 def with_recursive(name, nonrecursive, recursive, opts=OPTS) 1199 raise(Error, 'This dataset does not support common table expressions') unless supports_cte? 1200 if hoist_cte?(nonrecursive) 1201 s, ds = hoist_cte(nonrecursive) 1202 s.with_recursive(name, ds, recursive, opts) 1203 elsif hoist_cte?(recursive) 1204 s, ds = hoist_cte(recursive) 1205 s.with_recursive(name, nonrecursive, ds, opts) 1206 else 1207 clone(:with=>((@opts[:with]||EMPTY_ARRAY) + [Hash[opts].merge!(:recursive=>true, :name=>name, :dataset=>nonrecursive.union(recursive, {:all=>opts[:union_all] != false, :from_self=>false}))]).freeze) 1208 end 1209 end
Returns a cloned dataset with the given row_proc.
ds = DB[:items]
ds.all # => [{:id=>2}]
ds.with_row_proc(:invert.to_proc).all # => [{2=>:id}]
# File lib/sequel/dataset/query.rb 1250 def with_row_proc(callable) 1251 clone(:row_proc=>callable) 1252 end
Returns a copy of the dataset with the static SQL used. This is useful if you want to keep the same row_proc/graph, but change the SQL used to custom SQL.
DB[:items].with_sql('SELECT * FROM foo') # SELECT * FROM foo
You can use placeholders in your SQL and provide arguments for those placeholders:
DB[:items].with_sql('SELECT ? FROM foo', 1) # SELECT 1 FROM foo
You can also provide a method name and arguments to call to get the SQL:
DB[:items].with_sql(:insert_sql, b: 1) # INSERT INTO items (b) VALUES (1)
Note that datasets that specify custom SQL using this method will generally ignore future dataset methods that modify the SQL used, as specifying custom SQL overrides Sequel’s SQL generator. You should probably limit yourself to the following dataset methods when using this method, or use the implicit_subquery extension:
- each
- all
- single_record (if only one record could be returned)
- single_value (if only one record could be returned, and a single column is selected)
- map
- delete (if a DELETE statement)
- update (if an UPDATE statement, with no arguments)
- insert (if an INSERT statement, with no arguments)
- truncate (if a TRUNCATE statement, with no arguments)
# File lib/sequel/dataset/query.rb 1284 def with_sql(sql, *args) 1285 if sql.is_a?(Symbol) 1286 sql = public_send(sql, *args) 1287 else 1288 sql = SQL::PlaceholderLiteralString.new(sql, args) unless args.empty? 1289 end 1290 clone(:sql=>sql) 1291 end
Protected Instance methods
Add the dataset to the list of compounds
# File lib/sequel/dataset/query.rb 1296 def compound_clone(type, dataset, opts) 1297 if dataset.is_a?(Dataset) && dataset.opts[:with] && !supports_cte_in_compounds? 1298 s, ds = hoist_cte(dataset) 1299 return s.compound_clone(type, ds, opts) 1300 end 1301 ds = compound_from_self.clone(:compounds=>(Array(@opts[:compounds]).map(&:dup) + [[type, dataset.compound_from_self, opts[:all]].freeze]).freeze) 1302 opts[:from_self] == false ? ds : ds.from_self(opts) 1303 end
Return true if the dataset has a non-nil value for any key in opts.
# File lib/sequel/dataset/query.rb 1306 def options_overlap(opts) 1307 !(@opts.map{|k,v| k unless v.nil?}.compact & opts).empty? 1308 end
Whether this dataset is a simple select from an underlying table, such as:
SELECT * FROM table SELECT table.* FROM table
# File lib/sequel/dataset/query.rb 1317 def simple_select_all? 1318 return false unless (f = @opts[:from]) && f.length == 1 1319 o = @opts.reject{|k,v| v.nil? || non_sql_option?(k)} 1320 from = f.first 1321 from = from.expression if from.is_a?(SQL::AliasedExpression) 1322 1323 if SIMPLE_SELECT_ALL_ALLOWED_FROM.any?{|x| from.is_a?(x)} 1324 case o.length 1325 when 1 1326 true 1327 when 2 1328 (s = o[:select]) && s.length == 1 && s.first.is_a?(SQL::ColumnAll) 1329 else 1330 false 1331 end 1332 else 1333 false 1334 end 1335 end
2 - Methods that execute code on the database
Constants
ACTION_METHODS = (<<-METHS).split.map(&:to_sym).freeze
    Action methods defined by Sequel that execute code on the database.

COLUMNS_CLONE_OPTIONS = {:distinct => nil, :limit => 0, :offset=>nil, :where=>nil, :having=>nil, :order=>nil, :row_proc=>nil, :graph=>nil, :eager_graph=>nil}.freeze
    The clone options to use when retrieving columns for a dataset.

COUNT_SELECT = Sequel.function(:count).*.as(:count)

EMPTY_SELECT = Sequel::SQL::AliasedExpression.new(1, :one)
Public Instance Aliases
with_sql_update -> with_sql_delete
Public Instance methods
Inserts the given argument into the database. Returns self so it can be used safely when chaining:
DB[:items] << {id: 0, name: 'Zero'} << DB[:old_items].select(:id, :name)
# File lib/sequel/dataset/actions.rb 28 def <<(arg) 29 insert(arg) 30 self 31 end
Returns the first record matching the conditions. Examples:
DB[:table][id: 1] # SELECT * FROM table WHERE (id = 1) LIMIT 1 # => {:id=>1}
# File lib/sequel/dataset/actions.rb 37 def [](*conditions) 38 raise(Error, 'You cannot call Dataset#[] with an integer or with no arguments') if (conditions.length == 1 and conditions.first.is_a?(Integer)) or conditions.length == 0 39 first(*conditions) 40 end
Returns an array with all records in the dataset. If a block is given, the array is iterated over after all items have been loaded.
DB[:table].all # SELECT * FROM table
# => [{:id=>1, ...}, {:id=>2, ...}, ...]

# Iterate over all rows in the table
DB[:table].all{|row| p row}
# File lib/sequel/dataset/actions.rb 50 def all(&block) 51 _all(block){|a| each{|r| a << r}} 52 end
Returns a hash with one column used as key and another used as value. If rows have duplicate values for the key column, the latter row(s) will overwrite the value of the previous row(s). If the value_column is not given or nil, uses the entire hash as the value.
DB[:table].as_hash(:id, :name) # SELECT * FROM table
# {1=>'Jim', 2=>'Bob', ...}

DB[:table].as_hash(:id) # SELECT * FROM table
# {1=>{:id=>1, :name=>'Jim'}, 2=>{:id=>2, :name=>'Bob'}, ...}
You can also provide an array of column names for either the key_column, the value column, or both:
DB[:table].as_hash([:id, :foo], [:name, :bar]) # SELECT * FROM table
# {[1, 3]=>['Jim', 'bo'], [2, 4]=>['Bob', 'be'], ...}

DB[:table].as_hash([:id, :name]) # SELECT * FROM table
# {[1, 'Jim']=>{:id=>1, :name=>'Jim'}, [2, 'Bob']=>{:id=>2, :name=>'Bob'}, ...}
Options:
:all
    Use all instead of each to retrieve the objects
:hash
    The object into which the values will be placed. If this is not given, an empty hash is used. This can be used to use a hash with a default value or default proc.
# File lib/sequel/dataset/actions.rb 847 def as_hash(key_column, value_column = nil, opts = OPTS) 848 h = opts[:hash] || {} 849 meth = opts[:all] ? :all : :each 850 if value_column 851 return naked.as_hash(key_column, value_column, opts) if row_proc 852 if value_column.is_a?(Array) 853 if key_column.is_a?(Array) 854 public_send(meth){|r| h[r.values_at(*key_column)] = r.values_at(*value_column)} 855 else 856 public_send(meth){|r| h[r[key_column]] = r.values_at(*value_column)} 857 end 858 else 859 if key_column.is_a?(Array) 860 public_send(meth){|r| h[r.values_at(*key_column)] = r[value_column]} 861 else 862 public_send(meth){|r| h[r[key_column]] = r[value_column]} 863 end 864 end 865 elsif key_column.is_a?(Array) 866 public_send(meth){|r| h[key_column.map{|k| r[k]}] = r} 867 else 868 public_send(meth){|r| h[r[key_column]] = r} 869 end 870 h 871 end
Returns the average value for the given column/expression. Uses a virtual row block if no argument is given.
DB[:table].avg(:number) # SELECT avg(number) FROM table LIMIT 1
# => 3

DB[:table].avg{function(column)} # SELECT avg(function(column)) FROM table LIMIT 1
# => 1
# File lib/sequel/dataset/actions.rb 61 def avg(arg=(no_arg = true), &block) 62 arg = Sequel.virtual_row(&block) if no_arg 63 _aggregate(:avg, arg) 64 end
Returns the columns in the result set in order as an array of symbols. If the columns are currently cached, returns the cached value. Otherwise, a SELECT query is performed to retrieve a single row in order to get the columns.
If you are looking for all columns for a single table and maybe some information about each column (e.g. database type), see Database#schema.
DB[:table].columns # => [:id, :name]
# File lib/sequel/dataset/actions.rb
def columns
  _columns || columns!
end
Ignore any cached column information and perform a query to retrieve a row in order to get the columns.
DB[:table].columns! # => [:id, :name]
# File lib/sequel/dataset/actions.rb
def columns!
  ds = clone(COLUMNS_CLONE_OPTIONS)
  ds.each{break}

  if cols = ds.cache[:_columns]
    self.columns = cols
  else
    []
  end
end
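A small sketch of the difference between the two methods (assuming the table's schema may change while the process runs):

ds = DB[:table]
ds.columns   # a query is issued to fetch a single row; the column list is cached
ds.columns   # cached, no query issued
ds.columns!  # always issues a fresh query, e.g. after an ALTER TABLE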
Returns the number of records in the dataset. If an argument is provided, it is used as the argument to count. If a block is provided, it is treated as a virtual row, and the result is used as the argument to count.
DB[:table].count # SELECT count(*) AS count FROM table LIMIT 1
# => 3

DB[:table].count(:column) # SELECT count(column) AS count FROM table LIMIT 1
# => 2

DB[:table].count{foo(column)} # SELECT count(foo(column)) AS count FROM table LIMIT 1
# => 1
# File lib/sequel/dataset/actions.rb 108 def count(arg=(no_arg=true), &block) 109 if no_arg && !block 110 cached_dataset(:_count_ds) do 111 aggregate_dataset.select(COUNT_SELECT).single_value_ds 112 end.single_value!.to_i 113 else 114 if block 115 if no_arg 116 arg = Sequel.virtual_row(&block) 117 else 118 raise Error, 'cannot provide both argument and block to Dataset#count' 119 end 120 end 121 122 _aggregate(:count, arg) 123 end 124 end
Deletes the records in the dataset, returning the number of records deleted.
DB[:table].delete # DELETE FROM table
# => 3
Some databases support using multiple tables in a DELETE query. This requires multiple FROM tables (JOINs can also be used). As multiple FROM tables use an implicit CROSS JOIN, you should make sure your WHERE condition uses the appropriate filters for the FROM tables:
DB.from(:a, :b).join(:c, :d=>Sequel[:b][:e]).where{{a[:f]=>b[:g], a[:id]=>c[:h]}}.
  delete
# DELETE FROM a
# USING b
# INNER JOIN c ON (c.d = b.e)
# WHERE ((a.f = b.g) AND (a.id = c.h))
# File lib/sequel/dataset/actions.rb
def delete(&block)
  sql = delete_sql
  if uses_returning?(:delete)
    returning_fetch_rows(sql, &block)
  else
    execute_dui(sql)
  end
end
Iterates over the records in the dataset as they are yielded from the database adapter, and returns self.
DB[:table].each{|row| p row} # SELECT * FROM table
Note that this method is not safe to use on many adapters if you are running additional queries inside the provided block. If you are running queries inside the block, you should use all instead of each for the outer queries, or use a separate thread or shard inside each.
# File lib/sequel/dataset/actions.rb
def each
  if rp = row_proc
    fetch_rows(select_sql){|r| yield rp.call(r)}
  else
    fetch_rows(select_sql){|r| yield r}
  end
  self
end
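A sketch of the safe pattern described above (albums and tracks are hypothetical tables): load the outer rows with all, then issue per-row queries inside the block:

DB[:albums].all do |album|                        # outer result set fully loaded first
  p DB[:tracks].where(album_id: album[:id]).count # additional query per row is now safe
end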
Returns true if no records exist in the dataset, false otherwise
DB[:table].empty? # SELECT 1 AS one FROM table LIMIT 1 # => false
# File lib/sequel/dataset/actions.rb
def empty?
  cached_dataset(:_empty_ds) do
    (@opts[:sql] ? from_self : self).single_value_ds.unordered.select(EMPTY_SELECT)
  end.single_value!.nil?
end
Returns the first matching record if no arguments are given. If an integer argument is given, it is interpreted as a limit, and returns all matching records up to that limit. If any other type of argument (or multiple arguments) is passed, it is treated as a filter and the first matching record is returned. If a block is given, it is used to filter the dataset before returning anything.
If there are no records in the dataset, returns nil (or an empty array if an integer argument is given).
Examples:
DB[:table].first # SELECT * FROM table LIMIT 1
# => {:id=>7}

DB[:table].first(2) # SELECT * FROM table LIMIT 2
# => [{:id=>6}, {:id=>4}]

DB[:table].first(id: 2) # SELECT * FROM table WHERE (id = 2) LIMIT 1
# => {:id=>2}

DB[:table].first(Sequel.lit("id = 3")) # SELECT * FROM table WHERE (id = 3) LIMIT 1
# => {:id=>3}

DB[:table].first(Sequel.lit("id = ?", 4)) # SELECT * FROM table WHERE (id = 4) LIMIT 1
# => {:id=>4}

DB[:table].first{id > 2} # SELECT * FROM table WHERE (id > 2) LIMIT 1
# => {:id=>5}

DB[:table].first(Sequel.lit("id > ?", 4)){id < 6} # SELECT * FROM table WHERE ((id > 4) AND (id < 6)) LIMIT 1
# => {:id=>5}

DB[:table].first(2){id < 2} # SELECT * FROM table WHERE (id < 2) LIMIT 2
# => [{:id=>1}]
# File lib/sequel/dataset/actions.rb 216 def first(*args, &block) 217 case args.length 218 when 0 219 unless block 220 return single_record 221 end 222 when 1 223 arg = args[0] 224 if arg.is_a?(Integer) 225 res = if block 226 if loader = cached_placeholder_literalizer(:_first_integer_cond_loader) do |pl| 227 where(pl.arg).limit(pl.arg) 228 end 229 230 loader.all(filter_expr(&block), arg) 231 else 232 where(&block).limit(arg).all 233 end 234 else 235 if loader = cached_placeholder_literalizer(:_first_integer_loader) do |pl| 236 limit(pl.arg) 237 end 238 239 loader.all(arg) 240 else 241 limit(arg).all 242 end 243 end 244 245 return res 246 end 247 where_args = args 248 args = arg 249 end 250 251 if loader = cached_where_placeholder_literalizer(where_args||args, block, :_first_cond_loader) do |pl| 252 _single_record_ds.where(pl.arg) 253 end 254 255 loader.first(filter_expr(args, &block)) 256 else 257 _single_record_ds.where(args, &block).single_record! 258 end 259 end
Calls first. If first returns nil (signaling that no row matches), raises a Sequel::NoMatchingRow exception.
# File lib/sequel/dataset/actions.rb
def first!(*args, &block)
  first(*args, &block) || raise(Sequel::NoMatchingRow.new(self))
end
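A minimal sketch of handling the exception:

begin
  DB[:table].first!(id: 0) # SELECT * FROM table WHERE (id = 0) LIMIT 1
rescue Sequel::NoMatchingRow
  # no row with id 0 exists
end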
Return the column value for the first matching record in the dataset. Raises an error if both an argument and a block are given.
DB[:table].get(:id) # SELECT id FROM table LIMIT 1 # => 3 ds.get{sum(id)} # SELECT sum(id) AS v FROM table LIMIT 1 # => 6
You can pass an array of expressions to return multiple values, but you must make sure each element in the array has an alias that Sequel can determine:
DB[:table].get([:id, :name]) # SELECT id, name FROM table LIMIT 1 # => [3, 'foo'] DB[:table].get{[sum(id).as(sum), name]} # SELECT sum(id) AS sum, name FROM table LIMIT 1 # => [6, 'foo']
# File lib/sequel/dataset/actions.rb 285 def get(column=(no_arg=true; nil), &block) 286 ds = naked 287 if block 288 raise(Error, 'Must call Dataset#get with an argument or a block, not both') unless no_arg 289 ds = ds.select(&block) 290 column = ds.opts[:select] 291 column = nil if column.is_a?(Array) && column.length < 2 292 else 293 case column 294 when Array 295 ds = ds.select(*column) 296 when LiteralString, Symbol, SQL::Identifier, SQL::QualifiedIdentifier, SQL::AliasedExpression 297 if loader = cached_placeholder_literalizer(:_get_loader) do |pl| 298 ds.single_value_ds.select(pl.arg) 299 end 300 301 return loader.get(column) 302 end 303 304 ds = ds.select(column) 305 else 306 if loader = cached_placeholder_literalizer(:_get_alias_loader) do |pl| 307 ds.single_value_ds.select(Sequel.as(pl.arg, :v)) 308 end 309 310 return loader.get(column) 311 end 312 313 ds = ds.select(Sequel.as(column, :v)) 314 end 315 end 316 317 if column.is_a?(Array) 318 if r = ds.single_record 319 r.values_at(*hash_key_symbols(column)) 320 end 321 else 322 ds.single_value 323 end 324 end
Inserts multiple records into the associated table. This method can be used to efficiently insert a large number of records into a table in a single query if the database supports it. Inserts are automatically wrapped in a transaction if necessary.
This method is called with a columns array and an array of value arrays:
DB[:table].import([:x, :y], [[1, 2], [3, 4]]) # INSERT INTO table (x, y) VALUES (1, 2) # INSERT INTO table (x, y) VALUES (3, 4)
or, if the database supports it:
# INSERT INTO table (x, y) VALUES (1, 2), (3, 4)
This method also accepts a dataset instead of an array of value arrays:
DB[:table].import([:x, :y], DB[:table2].select(:a, :b)) # INSERT INTO table (x, y) SELECT a, b FROM table2
Options:
:commit_every |
Open a new transaction for every given number of records. For example, if you provide a value of 50, will commit after every 50 records. When a transaction is not required, this option controls the maximum number of values to insert with a single statement; it does not force the use of a transaction. |
:return |
When this is set to :primary_key, returns an array of autoincremented primary key values for the rows inserted. This does not have an effect if the values are given as a dataset. |
:server |
Set the server/shard to use for the transaction and insert queries. |
:skip_transaction |
Do not use a transaction even when using multiple INSERT queries. |
:slice |
Same as :commit_every, :commit_every takes precedence. |
# File lib/sequel/dataset/actions.rb 362 def import(columns, values, opts=OPTS) 363 return insert(columns, values) if values.is_a?(Dataset) 364 365 return if values.empty? 366 raise(Error, 'Using Sequel::Dataset#import with an empty column array is not allowed') if columns.empty? 367 ds = opts[:server] ? server(opts[:server]) : self 368 369 if slice_size = opts.fetch(:commit_every, opts.fetch(:slice, default_import_slice)) 370 offset = 0 371 rows = [] 372 while offset < values.length 373 rows << ds._import(columns, values[offset, slice_size], opts) 374 offset += slice_size 375 end 376 rows.flatten 377 else 378 ds._import(columns, values, opts) 379 end 380 end
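A short sketch combining the options above (the columns and generated rows are hypothetical):

rows = (1..10_000).map{|i| [i, "name #{i}"]}
DB[:table].import([:num, :name], rows, commit_every: 1000)
# Inserts the rows in slices of 1000; when a transaction is used,
# each slice is committed separately.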
Inserts values into the associated table. The returned value is generally the value of the autoincremented primary key for the inserted row, assuming that a single row is inserted and the table has an autoincrementing primary key.
insert
handles a number of different argument formats:
no arguments or single empty hash |
Uses |
single hash |
Most common format, treats keys as columns and values as values |
single array |
Treats entries as values, with no columns |
two arrays |
Treats first array as columns, second array as values |
single Dataset |
Treats as an insert based on a selection from the dataset given, with no columns |
array and dataset |
Treats as an insert based on a selection from the dataset given, with the columns given by the array. |
Examples:
DB[:items].insert                           # INSERT INTO items DEFAULT VALUES
DB[:items].insert({})                       # INSERT INTO items DEFAULT VALUES
DB[:items].insert([1,2,3])                  # INSERT INTO items VALUES (1, 2, 3)
DB[:items].insert([:a, :b], [1,2])          # INSERT INTO items (a, b) VALUES (1, 2)
DB[:items].insert(a: 1, b: 2)               # INSERT INTO items (a, b) VALUES (1, 2)
DB[:items].insert(DB[:old_items])           # INSERT INTO items SELECT * FROM old_items
DB[:items].insert([:a, :b], DB[:old_items]) # INSERT INTO items (a, b) SELECT * FROM old_items
# File lib/sequel/dataset/actions.rb
def insert(*values, &block)
  sql = insert_sql(*values)
  if uses_returning?(:insert)
    returning_fetch_rows(sql, &block)
  else
    execute_insert(sql)
  end
end
Reverses the order and then runs first with the given arguments and block. Note that this will not necessarily give you the last record in the dataset, unless you have an unambiguous order. If there is not currently an order for this dataset, raises an Error.
DB[:table].order(:id).last # SELECT * FROM table ORDER BY id DESC LIMIT 1
# => {:id=>10}

DB[:table].order(Sequel.desc(:id)).last(2) # SELECT * FROM table ORDER BY id ASC LIMIT 2
# => [{:id=>1}, {:id=>2}]
# File lib/sequel/dataset/actions.rb
def last(*args, &block)
  raise(Error, 'No order specified') unless @opts[:order]
  reverse.first(*args, &block)
end
Maps column values for each record in the dataset (if an argument is given) or performs the stock mapping functionality of Enumerable
otherwise. Raises an Error
if both an argument and block are given.
DB[:table].map(:id) # SELECT * FROM table # => [1, 2, 3, ...] DB[:table].map{|r| r[:id] * 2} # SELECT * FROM table # => [2, 4, 6, ...]
You can also provide an array of column names:
DB[:table].map([:id, :name]) # SELECT * FROM table # => [[1, 'A'], [2, 'B'], [3, 'C'], ...]
# File lib/sequel/dataset/actions.rb 456 def map(column=nil, &block) 457 if column 458 raise(Error, 'Must call Dataset#map with either an argument or a block, not both') if block 459 return naked.map(column) if row_proc 460 if column.is_a?(Array) 461 super(){|r| r.values_at(*column)} 462 else 463 super(){|r| r[column]} 464 end 465 else 466 super(&block) 467 end 468 end
Returns the maximum value for the given column/expression. Uses a virtual row block if no argument is given.
DB[:table].max(:id) # SELECT max(id) FROM table LIMIT 1 # => 10 DB[:table].max{function(column)} # SELECT max(function(column)) FROM table LIMIT 1 # => 7
# File lib/sequel/dataset/actions.rb 477 def max(arg=(no_arg = true), &block) 478 arg = Sequel.virtual_row(&block) if no_arg 479 _aggregate(:max, arg) 480 end
Execute a MERGE statement, which allows for INSERT, UPDATE, and DELETE behavior in a single query, based on whether rows from a source table match rows in the current table, based on the join conditions.
Unless the dataset uses static SQL, to use merge, you must first have called merge_using to specify the merge source and join conditions. You will then likely call one or more of the following methods to specify MERGE behavior by adding WHEN [NOT] MATCHED clauses:
- merge_insert
- merge_update
- merge_delete
The WHEN [NOT] MATCHED clauses are added to the SQL
in the order these methods were called on the dataset. If none of these methods are called, an error is raised.
Example:
DB[:m1].
  merge_using(:m2, i1: :i2).
  merge_insert(i1: :i2, a: Sequel[:b]+11).
  merge_delete{a > 30}.
  merge_update(i1: Sequel[:i1]+:i2+10, a: Sequel[:a]+:b+20).
  merge
SQL:
MERGE INTO m1 USING m2 ON (i1 = i2)
WHEN NOT MATCHED THEN INSERT (i1, a) VALUES (i2, (b + 11))
WHEN MATCHED AND (a > 30) THEN DELETE
WHEN MATCHED THEN UPDATE SET i1 = (i1 + i2 + 10), a = (a + b + 20)
On PostgreSQL, two additional merge methods are supported, for the PostgreSQL-specific DO NOTHING syntax.
- merge_do_nothing_when_matched
- merge_do_nothing_when_not_matched
This method is supported on Oracle, but Oracle’s MERGE support is non-standard, and has the following issues:
- DELETE clause requires UPDATE clause
- DELETE clause requires a condition
- DELETE clause only affects rows updated by UPDATE clause
# File lib/sequel/dataset/actions.rb 527 def merge 528 execute_ddl(merge_sql) 529 end
Returns the minimum value for the given column/expression. Uses a virtual row block if no argument is given.
DB[:table].min(:id) # SELECT min(id) FROM table LIMIT 1 # => 1 DB[:table].min{function(column)} # SELECT min(function(column)) FROM table LIMIT 1 # => 0
# File lib/sequel/dataset/actions.rb 538 def min(arg=(no_arg = true), &block) 539 arg = Sequel.virtual_row(&block) if no_arg 540 _aggregate(:min, arg) 541 end
This is a front end for import that allows you to submit an array of hashes instead of arrays of columns and values:
DB[:table].multi_insert([{x: 1}, {x: 2}]) # INSERT INTO table (x) VALUES (1) # INSERT INTO table (x) VALUES (2)
Be aware that all hashes should have the same keys if you use this calling method, otherwise some columns could be missed or set to null instead of to default values.
This respects the same options as import
.
# File lib/sequel/dataset/actions.rb
def multi_insert(hashes, opts=OPTS)
  return if hashes.empty?
  columns = hashes.first.keys
  import(columns, hashes.map{|h| columns.map{|c| h[c]}}, opts)
end
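One way to guard against mismatched keys is to normalize every hash to the same key set before calling multi_insert; a sketch with hypothetical data:

rows = [{x: 1}, {x: 2, y: 3}]
keys = rows.flat_map(&:keys).uniq
DB[:table].multi_insert(rows.map{|h| keys.to_h{|k| [k, h[k]]}})
# INSERT INTO table (x, y) VALUES (1, NULL)
# INSERT INTO table (x, y) VALUES (2, 3)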
Yields each row in the dataset, but internally uses multiple queries as needed to process the entire result set without keeping all rows in the dataset in memory, even if the underlying driver buffers all query results in memory.
Because this uses multiple queries internally, it also uses a transaction internally in order to remain consistent. Additionally, to work correctly, the dataset must have an unambiguous order. Using an ambiguous order can result in an infinite loop, as well as subtler bugs such as yielding duplicate rows or skipping rows. Sequel checks that datasets using this method have an order, but it cannot ensure that the order is unambiguous.
Note that this method is not safe to use on many adapters if you are running additional queries inside the provided block. If you are running queries inside the block, use a separate thread or shard inside paged_each
.
Options:
:rows_per_fetch |
The number of rows to fetch per query. Defaults to 1000. |
:strategy |
The strategy to use for paging of results. By default this is :offset, for using an approach with a limit and offset for every page. This can be set to :filter, which uses a limit and a filter that excludes rows from previous pages. In order for this strategy to work, you must be selecting the columns you are ordering by, and none of the columns can contain NULLs. Note that some |
:filter_values |
If the strategy: :filter option is used, this option should be a proc that accepts the last retrieved row for the previous page and an array of ORDER BY expressions, and returns an array of values relating to those expressions for the last retrieved row. You will need to use this option if your ORDER BY expressions are not simple columns, if they contain qualified identifiers that would be ambiguous unqualified, if they contain any identifiers that are aliased in SELECT, and potentially other cases. |
:skip_transaction |
Do not use a transaction. This can be useful if you want to prevent a lock on the database table, at the expense of consistency. |
Examples:
DB[:table].order(:id).paged_each{|row| }
# SELECT * FROM table ORDER BY id LIMIT 1000
# SELECT * FROM table ORDER BY id LIMIT 1000 OFFSET 1000
# ...

DB[:table].order(:id).paged_each(rows_per_fetch: 100){|row| }
# SELECT * FROM table ORDER BY id LIMIT 100
# SELECT * FROM table ORDER BY id LIMIT 100 OFFSET 100
# ...

DB[:table].order(:id).paged_each(strategy: :filter){|row| }
# SELECT * FROM table ORDER BY id LIMIT 1000
# SELECT * FROM table WHERE id > 1001 ORDER BY id LIMIT 1000
# ...

DB[:table].order(:id).paged_each(strategy: :filter, filter_values: lambda{|row, exprs| [row[:id]]}){|row| }
# SELECT * FROM table ORDER BY id LIMIT 1000
# SELECT * FROM table WHERE id > 1001 ORDER BY id LIMIT 1000
# ...
# File lib/sequel/dataset/actions.rb 618 def paged_each(opts=OPTS) 619 unless @opts[:order] 620 raise Sequel::Error, "Dataset#paged_each requires the dataset be ordered" 621 end 622 unless defined?(yield) 623 return enum_for(:paged_each, opts) 624 end 625 626 total_limit = @opts[:limit] 627 offset = @opts[:offset] 628 if server = @opts[:server] 629 opts = Hash[opts] 630 opts[:server] = server 631 end 632 633 rows_per_fetch = opts[:rows_per_fetch] || 1000 634 strategy = if offset || total_limit 635 :offset 636 else 637 opts[:strategy] || :offset 638 end 639 640 db.transaction(opts) do 641 case strategy 642 when :filter 643 filter_values = opts[:filter_values] || proc{|row, exprs| exprs.map{|e| row[hash_key_symbol(e)]}} 644 base_ds = ds = limit(rows_per_fetch) 645 while ds 646 last_row = nil 647 ds.each do |row| 648 last_row = row 649 yield row 650 end 651 ds = (base_ds.where(ignore_values_preceding(last_row, &filter_values)) if last_row) 652 end 653 else 654 offset ||= 0 655 num_rows_yielded = rows_per_fetch 656 total_rows = 0 657 658 while num_rows_yielded == rows_per_fetch && (total_limit.nil? || total_rows < total_limit) 659 if total_limit && total_rows + rows_per_fetch > total_limit 660 rows_per_fetch = total_limit - total_rows 661 end 662 663 num_rows_yielded = 0 664 limit(rows_per_fetch, offset).each do |row| 665 num_rows_yielded += 1 666 total_rows += 1 if total_limit 667 yield row 668 end 669 670 offset += rows_per_fetch 671 end 672 end 673 end 674 675 self 676 end
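As the source above shows, calling paged_each without a block returns an Enumerator, so paging can be combined with other Enumerable methods; a small sketch (the per-slice processing is hypothetical):

DB[:table].order(:id).paged_each(rows_per_fetch: 500).each_slice(100) do |rows|
  # process rows 100 at a time; the database is still queried 500 rows per fetch
end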
Returns a hash with key_column values as keys and value_column values as values. Similar to as_hash
, but only selects the columns given. Like as_hash
, it accepts an optional :hash parameter, into which entries will be merged.
DB[:table].select_hash(:id, :name) # SELECT id, name FROM table # => {1=>'a', 2=>'b', ...}
You can also provide an array of column names for either the key_column, the value column, or both:
DB[:table].select_hash([:id, :foo], [:name, :bar]) # SELECT id, foo, name, bar FROM table # => {[1, 3]=>['a', 'c'], [2, 4]=>['b', 'd'], ...}
When using this method, you must be sure that each expression has an alias that Sequel
can determine.
# File lib/sequel/dataset/actions.rb 696 def select_hash(key_column, value_column, opts = OPTS) 697 _select_hash(:as_hash, key_column, value_column, opts) 698 end
Returns a hash with key_column values as keys and arrays of value_column values as values. Similar to to_hash_groups, but only selects the columns given. Like to_hash_groups, it accepts an optional :hash parameter, into which entries will be merged.
DB[:table].select_hash_groups(:name, :id) # SELECT id, name FROM table # => {'a'=>[1, 4, ...], 'b'=>[2, ...], ...}
You can also provide an array of column names for either the key_column, the value column, or both:
DB[:table].select_hash_groups([:first, :middle], [:last, :id]) # SELECT first, middle, last, id FROM table # => {['a', 'b']=>[['c', 1], ['d', 2], ...], ...}
When using this method, you must be sure that each expression has an alias that Sequel
can determine.
# File lib/sequel/dataset/actions.rb 717 def select_hash_groups(key_column, value_column, opts = OPTS) 718 _select_hash(:to_hash_groups, key_column, value_column, opts) 719 end
Selects the column given (either as an argument or as a block), and returns an array of all values of that column in the dataset. If you give a block argument that returns an array with multiple entries, the contents of the resulting array are undefined. Raises an Error
if called with both an argument and a block.
DB[:table].select_map(:id) # SELECT id FROM table # => [3, 5, 8, 1, ...] DB[:table].select_map{id * 2} # SELECT (id * 2) FROM table # => [6, 10, 16, 2, ...]
You can also provide an array of column names:
DB[:table].select_map([:id, :name]) # SELECT id, name FROM table # => [[1, 'A'], [2, 'B'], [3, 'C'], ...]
If you provide an array of expressions, you must be sure that each entry in the array has an alias that Sequel
can determine.
# File lib/sequel/dataset/actions.rb 740 def select_map(column=nil, &block) 741 _select_map(column, false, &block) 742 end
The same as select_map
, but in addition orders the array by the column.
DB[:table].select_order_map(:id) # SELECT id FROM table ORDER BY id # => [1, 2, 3, 4, ...] DB[:table].select_order_map{id * 2} # SELECT (id * 2) FROM table ORDER BY (id * 2) # => [2, 4, 6, 8, ...]
You can also provide an array of column names:
DB[:table].select_order_map([:id, :name]) # SELECT id, name FROM table ORDER BY id, name # => [[1, 'A'], [2, 'B'], [3, 'C'], ...]
If you provide an array of expressions, you must be sure that each entry in the array has an alias that Sequel
can determine.
# File lib/sequel/dataset/actions.rb 759 def select_order_map(column=nil, &block) 760 _select_map(column, true, &block) 761 end
Limits the dataset to one record, and returns the first record in the dataset, or nil if the dataset has no records. Users should probably use first
instead of this method. Example:
DB[:test].single_record # SELECT * FROM test LIMIT 1 # => {:column_name=>'value'}
# File lib/sequel/dataset/actions.rb 769 def single_record 770 _single_record_ds.single_record! 771 end
Returns the first record in dataset, without limiting the dataset. Returns nil if the dataset has no records. Users should probably use first
instead of this method. This should only be used if you know the dataset is already limited to a single record. This method may be desirable to use for performance reasons, as it does not clone the receiver. Example:
DB[:test].single_record! # SELECT * FROM test # => {:column_name=>'value'}
# File lib/sequel/dataset/actions.rb 781 def single_record! 782 with_sql_first(select_sql) 783 end
Returns the first value of the first record in the dataset. Returns nil if dataset is empty. Users should generally use get
instead of this method. Example:
DB[:test].single_value # SELECT * FROM test LIMIT 1 # => 'value'
# File lib/sequel/dataset/actions.rb 791 def single_value 792 single_value_ds.each do |r| 793 r.each{|_, v| return v} 794 end 795 nil 796 end
Returns the first value of the first record in the dataset, without limiting the dataset. Returns nil if the dataset is empty. Users should generally use get
instead of this method. Should not be used on graphed datasets or datasets that have row_procs that don’t return hashes. This method may be desirable to use for performance reasons, as it does not clone the receiver.
DB[:test].single_value! # SELECT * FROM test # => 'value'
# File lib/sequel/dataset/actions.rb 806 def single_value! 807 with_sql_single_value(select_sql) 808 end
Returns the sum for the given column/expression. Uses a virtual row block if no column is given.
DB[:table].sum(:id) # SELECT sum(id) FROM table LIMIT 1 # => 55 DB[:table].sum{function(column)} # SELECT sum(function(column)) FROM table LIMIT 1 # => 10
# File lib/sequel/dataset/actions.rb 817 def sum(arg=(no_arg = true), &block) 818 arg = Sequel.virtual_row(&block) if no_arg 819 _aggregate(:sum, arg) 820 end
Alias of as_hash
for backwards compatibility.
# File lib/sequel/dataset/actions.rb 874 def to_hash(*a) 875 as_hash(*a) 876 end
Returns a hash with one column used as key and the values being an array of column values. If the value_column is not given or nil, uses the entire hash as the value.
DB[:table].to_hash_groups(:name, :id) # SELECT * FROM table # {'Jim'=>[1, 4, 16, ...], 'Bob'=>[2], ...} DB[:table].to_hash_groups(:name) # SELECT * FROM table # {'Jim'=>[{:id=>1, :name=>'Jim'}, {:id=>4, :name=>'Jim'}, ...], 'Bob'=>[{:id=>2, :name=>'Bob'}], ...}
You can also provide an array of column names for either the key_column, the value column, or both:
DB[:table].to_hash_groups([:first, :middle], [:last, :id]) # SELECT * FROM table # {['Jim', 'Bob']=>[['Smith', 1], ['Jackson', 4], ...], ...} DB[:table].to_hash_groups([:first, :middle]) # SELECT * FROM table # {['Jim', 'Bob']=>[{:id=>1, :first=>'Jim', :middle=>'Bob', :last=>'Smith'}, ...], ...}
Options:
:all |
Use all instead of each to retrieve the objects |
:hash |
The object into which the values will be placed. If this is not given, an empty hash is used. This can be used to use a hash with a default value or default proc. |
# File lib/sequel/dataset/actions.rb 902 def to_hash_groups(key_column, value_column = nil, opts = OPTS) 903 h = opts[:hash] || {} 904 meth = opts[:all] ? :all : :each 905 if value_column 906 return naked.to_hash_groups(key_column, value_column, opts) if row_proc 907 if value_column.is_a?(Array) 908 if key_column.is_a?(Array) 909 public_send(meth){|r| (h[r.values_at(*key_column)] ||= []) << r.values_at(*value_column)} 910 else 911 public_send(meth){|r| (h[r[key_column]] ||= []) << r.values_at(*value_column)} 912 end 913 else 914 if key_column.is_a?(Array) 915 public_send(meth){|r| (h[r.values_at(*key_column)] ||= []) << r[value_column]} 916 else 917 public_send(meth){|r| (h[r[key_column]] ||= []) << r[value_column]} 918 end 919 end 920 elsif key_column.is_a?(Array) 921 public_send(meth){|r| (h[key_column.map{|k| r[k]}] ||= []) << r} 922 else 923 public_send(meth){|r| (h[r[key_column]] ||= []) << r} 924 end 925 h 926 end
Truncates the dataset. Returns nil.
DB[:table].truncate # TRUNCATE table # => nil
# File lib/sequel/dataset/actions.rb
def truncate
  execute_ddl(truncate_sql)
end
Updates values for the dataset. The returned value is the number of rows updated. values
should be a hash where the keys are columns to set and values are the values to which to set the columns.
DB[:table].update(x: nil) # UPDATE table SET x = NULL
# => 10

DB[:table].update(x: Sequel[:x]+1, y: 0) # UPDATE table SET x = (x + 1), y = 0
# => 10
Some databases support using multiple tables in an UPDATE query. This requires multiple FROM tables (JOINs can also be used). As multiple FROM tables use an implicit CROSS JOIN, you should make sure your WHERE condition uses the appropriate filters for the FROM tables:
DB.from(:a, :b).join(:c, :d=>Sequel[:b][:e]).where{{a[:f]=>b[:g], a[:id]=>10}}.
  update(:f=>Sequel[:c][:h])
# UPDATE a
# SET f = c.h
# FROM b
# INNER JOIN c ON (c.d = b.e)
# WHERE ((a.f = b.g) AND (a.id = 10))
# File lib/sequel/dataset/actions.rb
def update(values=OPTS, &block)
  sql = update_sql(values)
  if uses_returning?(:update)
    returning_fetch_rows(sql, &block)
  else
    execute_dui(sql)
  end
end
Return an array of all rows matching the given filter condition, also yielding each row to the given block. Basically the same as where(cond).all(&block), except it can be optimized to not create an intermediate dataset.
DB[:table].where_all(id: [1,2,3]) # SELECT * FROM table WHERE (id IN (1, 2, 3))
# File lib/sequel/dataset/actions.rb 973 def where_all(cond, &block) 974 if loader = _where_loader([cond], nil) 975 loader.all(filter_expr(cond), &block) 976 else 977 where(cond).all(&block) 978 end 979 end
Iterate over all rows matching the given filter condition, yielding each row to the given block. Basically the same as where(cond).each(&block), except it can be optimized to not create an intermediate dataset.
DB[:table].where_each(id: [1,2,3]){|row| p row} # SELECT * FROM table WHERE (id IN (1, 2, 3))
# File lib/sequel/dataset/actions.rb 987 def where_each(cond, &block) 988 if loader = _where_loader([cond], nil) 989 loader.each(filter_expr(cond), &block) 990 else 991 where(cond).each(&block) 992 end 993 end
Filter the datasets using the given filter condition, then return a single value. This assumes that the dataset has already been setup to limit the selection to a single column. Basically the same as where(cond).single_value, except it can be optimized to not create an intermediate dataset.
DB[:table].select(:name).where_single_value(id: 1) # SELECT name FROM table WHERE (id = 1) LIMIT 1
# File lib/sequel/dataset/actions.rb 1002 def where_single_value(cond) 1003 if loader = cached_where_placeholder_literalizer([cond], nil, :_where_single_value_loader) do |pl| 1004 single_value_ds.where(pl.arg) 1005 end 1006 1007 loader.get(filter_expr(cond)) 1008 else 1009 where(cond).single_value 1010 end 1011 end
Run the given SQL
and return an array of all rows. If a block is given, each row is yielded to the block after all rows are loaded. See with_sql_each.
# File lib/sequel/dataset/actions.rb 1015 def with_sql_all(sql, &block) 1016 _all(block){|a| with_sql_each(sql){|r| a << r}} 1017 end
Execute the given SQL
and return the number of rows deleted. This exists solely as an optimization, replacing with_sql
(sql).delete. It’s significantly faster as it does not require cloning the current dataset.
# File lib/sequel/dataset/actions.rb 1022 def with_sql_delete(sql) 1023 execute_dui(sql) 1024 end
Run the given SQL
and yield each returned row to the block.
# File lib/sequel/dataset/actions.rb 1028 def with_sql_each(sql) 1029 if rp = row_proc 1030 _with_sql_dataset.fetch_rows(sql){|r| yield rp.call(r)} 1031 else 1032 _with_sql_dataset.fetch_rows(sql){|r| yield r} 1033 end 1034 self 1035 end
Run the given SQL
and return the first row, or nil if no rows were returned. See with_sql_each.
# File lib/sequel/dataset/actions.rb 1039 def with_sql_first(sql) 1040 with_sql_each(sql){|r| return r} 1041 nil 1042 end
Execute the given SQL
and (on most databases) return the primary key of the inserted row.
# File lib/sequel/dataset/actions.rb 1055 def with_sql_insert(sql) 1056 execute_insert(sql) 1057 end
Run the given SQL
and return the first value in the first row, or nil if no rows were returned. For this to make sense, the SQL
given should select only a single value. See with_sql_each.
# File lib/sequel/dataset/actions.rb 1047 def with_sql_single_value(sql) 1048 if r = with_sql_first(sql) 1049 r.each{|_, v| return v} 1050 end 1051 end
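A short sketch of the with_sql_* family (the raw SQL strings below are illustrative):

ds = DB[:table]
ds.with_sql_all("SELECT * FROM table WHERE id < 10")    # => array of rows
ds.with_sql_first("SELECT * FROM table WHERE id = 1")   # => first row or nil
ds.with_sql_single_value("SELECT count(*) FROM table")  # => a single value
ds.with_sql_delete("DELETE FROM table WHERE id = 1")    # => number of rows deleted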
Protected Instance methods
Internals of import
. If primary key values are requested, use separate insert commands for each row. Otherwise, call multi_insert_sql
and execute each statement it gives separately. A transaction is only used if there are multiple statements to execute.
# File lib/sequel/dataset/actions.rb 1065 def _import(columns, values, opts) 1066 trans_opts = Hash[opts] 1067 trans_opts[:server] = @opts[:server] 1068 if opts[:return] == :primary_key 1069 _import_transaction(values, trans_opts){values.map{|v| insert(columns, v)}} 1070 else 1071 stmts = multi_insert_sql(columns, values) 1072 _import_transaction(stmts, trans_opts){stmts.each{|st| execute_dui(st)}} 1073 end 1074 end
Return an array of arrays of values given by the symbols in ret_cols.
# File lib/sequel/dataset/actions.rb 1077 def _select_map_multiple(ret_cols) 1078 map{|r| r.values_at(*ret_cols)} 1079 end
Returns an array of the first value in each row.
# File lib/sequel/dataset/actions.rb 1082 def _select_map_single 1083 k = nil 1084 map{|r| r[k||=r.keys.first]} 1085 end
A dataset for returning single values from the current dataset.
# File lib/sequel/dataset/actions.rb 1088 def single_value_ds 1089 clone(:limit=>1).ungraphed.naked 1090 end
3 - User Methods relating to SQL Creation
Public Instance methods
Returns an EXISTS
clause for the dataset as an SQL::PlaceholderLiteralString
.
DB.select(1).where(DB[:items].exists) # SELECT 1 WHERE (EXISTS (SELECT * FROM items))
# File lib/sequel/dataset/sql.rb 14 def exists 15 SQL::PlaceholderLiteralString.new(EXISTS, [self], true) 16 end
Returns an INSERT SQL
query string. See insert
.
DB[:items].insert_sql(a: 1) # => "INSERT INTO items (a) VALUES (1)"
# File lib/sequel/dataset/sql.rb 22 def insert_sql(*values) 23 return static_sql(@opts[:sql]) if @opts[:sql] 24 25 check_insert_allowed! 26 27 columns, values = _parse_insert_sql_args(values) 28 if values.is_a?(Array) && values.empty? && !insert_supports_empty_values? 29 columns, values = insert_empty_columns_values 30 elsif values.is_a?(Dataset) && hoist_cte?(values) && supports_cte?(:insert) 31 ds, values = hoist_cte(values) 32 return ds.clone(:columns=>columns, :values=>values).send(:_insert_sql) 33 end 34 clone(:columns=>columns, :values=>values).send(:_insert_sql) 35 end
Append a literal representation of a value to the given SQL
string.
If an unsupported object is given, an Error
is raised.
# File lib/sequel/dataset/sql.rb 40 def literal_append(sql, v) 41 case v 42 when Symbol 43 if skip_symbol_cache? 44 literal_symbol_append(sql, v) 45 else 46 unless l = db.literal_symbol(v) 47 l = String.new 48 literal_symbol_append(l, v) 49 db.literal_symbol_set(v, l) 50 end 51 sql << l 52 end 53 when String 54 case v 55 when LiteralString 56 sql << v 57 when SQL::Blob 58 literal_blob_append(sql, v) 59 else 60 literal_string_append(sql, v) 61 end 62 when Integer 63 sql << literal_integer(v) 64 when Hash 65 literal_hash_append(sql, v) 66 when SQL::Expression 67 literal_expression_append(sql, v) 68 when Float 69 sql << literal_float(v) 70 when BigDecimal 71 sql << literal_big_decimal(v) 72 when NilClass 73 sql << literal_nil 74 when TrueClass 75 sql << literal_true 76 when FalseClass 77 sql << literal_false 78 when Array 79 literal_array_append(sql, v) 80 when Time 81 v.is_a?(SQLTime) ? literal_sqltime_append(sql, v) : literal_time_append(sql, v) 82 when DateTime 83 literal_datetime_append(sql, v) 84 when Date 85 sql << literal_date(v) 86 when Dataset 87 literal_dataset_append(sql, v) 88 else 89 literal_other_append(sql, v) 90 end 91 end
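For a feel of what literalization produces, a sketch using the non-append form Dataset#literal, which builds a string via this method (exact quoting varies by adapter):

DB[:table].literal("a'b")                 # => "'a''b'"
DB[:table].literal([1, 2, nil])           # => "(1, 2, NULL)"
DB[:table].literal(Date.new(2020, 1, 2))  # => "'2020-01-02'"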
The SQL
to use for the MERGE statement.
# File lib/sequel/dataset/sql.rb 94 def merge_sql 95 raise Error, "This database doesn't support MERGE" unless supports_merge? 96 if sql = opts[:sql] 97 return static_sql(sql) 98 end 99 if sql = cache_get(:_merge_sql) 100 return sql 101 end 102 source, join_condition = @opts[:merge_using] 103 raise Error, "No USING clause for MERGE" unless source 104 sql = @opts[:append_sql] || sql_string_origin 105 106 select_with_sql(sql) 107 sql << "MERGE INTO " 108 source_list_append(sql, @opts[:from]) 109 sql << " USING " 110 identifier_append(sql, source) 111 sql << " ON " 112 literal_append(sql, join_condition) 113 _merge_when_sql(sql) 114 cache_set(:_merge_sql, sql) if cache_sql? 115 sql 116 end
Returns an array of insert statements for inserting multiple records. This method is used by multi_insert
to format insert statements and expects a keys array and an array of value arrays.
# File lib/sequel/dataset/sql.rb 121 def multi_insert_sql(columns, values) 122 case multi_insert_sql_strategy 123 when :values 124 sql = LiteralString.new('VALUES ') 125 expression_list_append(sql, values.map{|r| Array(r)}) 126 [insert_sql(columns, sql)] 127 when :union 128 c = false 129 sql = LiteralString.new 130 u = ' UNION ALL SELECT ' 131 f = empty_from_sql 132 values.each do |v| 133 if c 134 sql << u 135 else 136 sql << 'SELECT ' 137 c = true 138 end 139 expression_list_append(sql, v) 140 sql << f if f 141 end 142 [insert_sql(columns, sql)] 143 else 144 values.map{|r| insert_sql(columns, r)} 145 end 146 end
Same as select_sql
, not aliased directly to make subclassing simpler.
# File lib/sequel/dataset/sql.rb 149 def sql 150 select_sql 151 end
Returns a TRUNCATE SQL
query string. See truncate
DB[:items].truncate_sql # => 'TRUNCATE items'
# File lib/sequel/dataset/sql.rb 156 def truncate_sql 157 if opts[:sql] 158 static_sql(opts[:sql]) 159 else 160 check_truncation_allowed! 161 check_not_limited!(:truncate) 162 raise(InvalidOperation, "Can't truncate filtered datasets") if opts[:where] || opts[:having] 163 t = String.new 164 source_list_append(t, opts[:from]) 165 _truncate_sql(t) 166 end 167 end
Formats an UPDATE statement using the given values. See update
.
DB[:items].update_sql(price: 100, category: 'software')
# => "UPDATE items SET price = 100, category = 'software'"
Raises an Error
if the dataset is grouped or includes more than one table.
# File lib/sequel/dataset/sql.rb 176 def update_sql(values = OPTS) 177 return static_sql(opts[:sql]) if opts[:sql] 178 check_update_allowed! 179 check_not_limited!(:update) 180 181 case values 182 when LiteralString 183 # nothing 184 when String 185 raise Error, "plain string passed to Dataset#update is not supported, use Sequel.lit to use a literal string" 186 end 187 188 clone(:values=>values).send(:_update_sql) 189 end
4 - Methods that describe what the dataset supports
Public Instance methods
Whether this dataset will provide an accurate number of rows matched for delete and update statements, true by default. Accurate in this case means the number of rows matched by the dataset's filter.
# File lib/sequel/dataset/features.rb 19 def provides_accurate_rows_matched? 20 true 21 end
Whether this dataset quotes identifiers.
# File lib/sequel/dataset/features.rb 12 def quote_identifiers? 13 @opts.fetch(:quote_identifiers, true) 14 end
Whether you must use a column alias list for recursive CTEs, false by default.
# File lib/sequel/dataset/features.rb 24 def recursive_cte_requires_column_aliases? 25 false 26 end
Whether type specifiers are required for prepared statement/bound variable argument placeholders (i.e. :bv__integer), false by default.
# File lib/sequel/dataset/features.rb 36 def requires_placeholder_type_specifiers? 37 false 38 end
Whether the dataset requires SQL
standard datetimes. False by default, as most allow strings with ISO 8601 format.
# File lib/sequel/dataset/features.rb 30 def requires_sql_standard_datetimes? 31 false 32 end
Whether the dataset supports common table expressions, false by default. If given, type
can be :select, :insert, :update, or :delete, in which case it determines whether WITH is supported for the respective statement type.
# File lib/sequel/dataset/features.rb 43 def supports_cte?(type=:select) 44 false 45 end
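These predicates let adapter-independent code branch on database capabilities; a sketch (the items table, its created_at column, and the recent_items CTE name are assumptions), falling back to a plain filtered query when CTEs are unavailable:

recent = DB[:items].where{created_at > Date.today - 7}
query = if DB[:items].supports_cte?
  DB[:recent_items].with(:recent_items, recent)
else
  recent
end
query.all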
Whether the dataset supports common table expressions in subqueries, false by default. If false, applies the WITH clause to the main query, which can cause issues if multiple WITH clauses use the same name.
# File lib/sequel/dataset/features.rb 50 def supports_cte_in_subqueries? 51 false 52 end
Whether deleting from joined datasets is supported, false by default.
# File lib/sequel/dataset/features.rb 55 def supports_deleting_joins? 56 supports_modifying_joins? 57 end
Whether the database supports derived column lists (e.g. “table_expr AS table_alias(column_alias1, column_alias2, …)”), true by default.
# File lib/sequel/dataset/features.rb 62 def supports_derived_column_lists? 63 true 64 end
Whether the dataset supports or can emulate the DISTINCT ON clause, false by default.
# File lib/sequel/dataset/features.rb 67 def supports_distinct_on? 68 false 69 end
Whether the dataset supports CUBE with GROUP BY, false by default.
# File lib/sequel/dataset/features.rb 72 def supports_group_cube? 73 false 74 end
Whether the dataset supports ROLLUP with GROUP BY, false by default.
# File lib/sequel/dataset/features.rb 77 def supports_group_rollup? 78 false 79 end
Whether the dataset supports GROUPING SETS with GROUP BY, false by default.
# File lib/sequel/dataset/features.rb 82 def supports_grouping_sets? 83 false 84 end
Whether this dataset supports the insert_select
method for returning all columns values directly from an insert query, false by default.
# File lib/sequel/dataset/features.rb 88 def supports_insert_select? 89 supports_returning?(:insert) 90 end
Whether the dataset supports the INTERSECT and EXCEPT compound operations, true by default.
# File lib/sequel/dataset/features.rb 93 def supports_intersect_except? 94 true 95 end
Whether the dataset supports the INTERSECT ALL and EXCEPT ALL compound operations, true by default.
# File lib/sequel/dataset/features.rb 98 def supports_intersect_except_all? 99 true 100 end
Whether the dataset supports the IS TRUE syntax, true by default.
# File lib/sequel/dataset/features.rb 103 def supports_is_true? 104 true 105 end
Whether the dataset supports the JOIN table USING (column1, …) syntax, true by default. If false, support is emulated using JOIN table ON (table.column1 = other_table.column1).
# File lib/sequel/dataset/features.rb 109 def supports_join_using? 110 true 111 end
Whether the dataset supports LATERAL for subqueries in the FROM or JOIN clauses, false by default.
# File lib/sequel/dataset/features.rb 114 def supports_lateral_subqueries? 115 false 116 end
Whether the MERGE statement is supported, false by default.
# File lib/sequel/dataset/features.rb 129 def supports_merge? 130 false 131 end
Whether modifying joined datasets is supported, false by default.
# File lib/sequel/dataset/features.rb 134 def supports_modifying_joins? 135 false 136 end
Whether the IN/NOT IN operators support multiple columns when an array of values is given, true by default.
# File lib/sequel/dataset/features.rb 140 def supports_multiple_column_in? 141 true 142 end
Whether the dataset supports raising an error instead of waiting for locked rows when returning data, false by default.
# File lib/sequel/dataset/features.rb 124 def supports_nowait? 125 false 126 end
Whether the dataset supports or can fully emulate the DISTINCT ON clause, including respecting the ORDER BY clause, false by default.
# File lib/sequel/dataset/features.rb 151 def supports_ordered_distinct_on? 152 supports_distinct_on? 153 end
Whether placeholder literalizers are supported, true by default.
# File lib/sequel/dataset/features.rb 156 def supports_placeholder_literalizer? 157 true 158 end
Whether the dataset supports pattern matching by regular expressions, false by default.
# File lib/sequel/dataset/features.rb 161 def supports_regexp? 162 false 163 end
Whether the dataset supports REPLACE syntax, false by default.
# File lib/sequel/dataset/features.rb 166 def supports_replace? 167 false 168 end
Whether the RETURNING clause is supported for the given type of query, false by default. type
can be :insert, :update, or :delete.
# File lib/sequel/dataset/features.rb 172 def supports_returning?(type) 173 false 174 end
Whether the database supports SELECT *, column FROM table
, true by default.
# File lib/sequel/dataset/features.rb 182 def supports_select_all_and_column? 183 true 184 end
Whether the dataset supports skipping locked rows when returning data, false by default.
# File lib/sequel/dataset/features.rb 177 def supports_skip_locked? 178 false 179 end
Whether the dataset supports timezones in literal timestamps, false by default.
# File lib/sequel/dataset/features.rb 187 def supports_timestamp_timezones? 188 false 189 end
Whether the dataset supports fractional seconds in literal timestamps, true by default.
# File lib/sequel/dataset/features.rb 192 def supports_timestamp_usecs? 193 true 194 end
Whether updating joined datasets is supported, false by default.
# File lib/sequel/dataset/features.rb 197 def supports_updating_joins? 198 supports_modifying_joins? 199 end
Whether the dataset supports WHERE TRUE (or WHERE 1 for databases that use 1 for true), true by default.
# File lib/sequel/dataset/features.rb 226 def supports_where_true? 227 true 228 end
Whether the dataset supports the WINDOW clause to define windows used by multiple window functions, false by default.
# File lib/sequel/dataset/features.rb 203 def supports_window_clause? 204 false 205 end
Whether the dataset supports the given window function option. True by default. This should only be called if supports_window_functions? is true. Possible options are :rows, :range, :groups, :offset, :exclude.
# File lib/sequel/dataset/features.rb 215 def supports_window_function_frame_option?(option) 216 case option 217 when :rows, :range, :offset 218 true 219 else 220 false 221 end 222 end
Whether the dataset supports window functions, false by default.
# File lib/sequel/dataset/features.rb 208 def supports_window_functions? 209 false 210 end
5 - Methods related to dataset graphing
Public Instance methods
Adds the given graph aliases to the list of graph aliases to use, unlike set_graph_aliases
, which replaces the list (the equivalent of select_append
when graphing). See set_graph_aliases
.
DB[:table].add_graph_aliases(some_alias: [:table, :column]) # SELECT ..., table.column AS some_alias
# File lib/sequel/dataset/graph.rb 18 def add_graph_aliases(graph_aliases) 19 graph = opts[:graph] 20 unless (graph && (ga = graph[:column_aliases])) 21 raise Error, "cannot call add_graph_aliases on a dataset that has not been called with graph or set_graph_aliases" 22 end 23 columns, graph_aliases = graph_alias_columns(graph_aliases) 24 select_append(*columns).clone(:graph => graph.merge(:column_aliases=>ga.merge(graph_aliases).freeze).freeze) 25 end
Similar to Dataset#join_table
, but uses unambiguous aliases for selected columns and keeps metadata about the aliases for use in other methods.
Arguments:
dataset |
Can be a symbol (specifying a table), another dataset, or an |
join_conditions |
Any condition(s) allowed by |
block |
A block that is passed to |
Options:
:from_self_alias |
The alias to use when the receiver is not a graphed dataset but it contains multiple FROM tables or a JOIN. In this case, the receiver is wrapped in a |
:implicit_qualifier |
The qualifier of implicit conditions, see |
:join_only |
Only join the tables, do not change the selected columns. |
:join_type |
The type of join to use (passed to |
:qualify |
The type of qualification to do, see |
:select |
An array of columns to select. When not used, selects all columns in the given dataset. When set to false, selects no columns and is like simply joining the tables, though graph keeps some metadata about the join that makes it important to use |
:table_alias |
The alias to use for the table. If not specified, doesn’t alias the table. You will get an error if the alias (or table) name is used more than once. |
# File lib/sequel/dataset/graph.rb 53 def graph(dataset, join_conditions = nil, options = OPTS, &block) 54 # Allow the use of a dataset or symbol as the first argument 55 # Find the table name/dataset based on the argument 56 table_alias = options[:table_alias] 57 table = dataset 58 create_dataset = true 59 60 case dataset 61 when Symbol 62 # let alias be the same as the table name (sans any optional schema) 63 # unless alias explicitly given in the symbol using ___ notation and symbol splitting is enabled 64 table_alias ||= split_symbol(table).compact.last 65 when Dataset 66 if dataset.simple_select_all? 67 table = dataset.opts[:from].first 68 table_alias ||= table 69 else 70 table_alias ||= dataset_alias((@opts[:num_dataset_sources] || 0)+1) 71 end 72 create_dataset = false 73 when SQL::Identifier 74 table_alias ||= table.value 75 when SQL::QualifiedIdentifier 76 table_alias ||= split_qualifiers(table).last 77 when SQL::AliasedExpression 78 return graph(table.expression, join_conditions, {:table_alias=>table.alias}.merge!(options), &block) 79 else 80 raise Error, "The dataset argument should be a symbol or dataset" 81 end 82 table_alias = table_alias.to_sym 83 84 if create_dataset 85 dataset = db.from(table) 86 end 87 88 # Raise Sequel::Error with explanation that the table alias has been used 89 raise_alias_error = lambda do 90 raise(Error, "this #{options[:table_alias] ? 'alias' : 'table'} has already been been used, please specify " \ 91 "#{options[:table_alias] ? 'a different alias' : 'an alias via the :table_alias option'}") 92 end 93 94 # Only allow table aliases that haven't been used 95 raise_alias_error.call if @opts[:graph] && @opts[:graph][:table_aliases] && @opts[:graph][:table_aliases].include?(table_alias) 96 97 table_alias_qualifier = qualifier_from_alias_symbol(table_alias, table) 98 implicit_qualifier = options[:implicit_qualifier] 99 joined_dataset = joined_dataset? 100 ds = self 101 graph = opts[:graph] 102 103 if !graph && (select = @opts[:select]) && !select.empty? 104 select_columns = nil 105 106 unless !joined_dataset && select.length == 1 && (select[0].is_a?(SQL::ColumnAll)) 107 force_from_self = false 108 select_columns = select.map do |sel| 109 unless col = _hash_key_symbol(sel) 110 force_from_self = true 111 break 112 end 113 114 [sel, col] 115 end 116 117 select_columns = nil if force_from_self 118 end 119 end 120 121 # Use a from_self if this is already a joined table (or from_self specifically disabled for graphs) 122 if (@opts[:graph_from_self] != false && !graph && (joined_dataset || force_from_self)) 123 from_selfed = true 124 implicit_qualifier = options[:from_self_alias] || first_source 125 ds = ds.from_self(:alias=>implicit_qualifier) 126 end 127 128 # Join the table early in order to avoid cloning the dataset twice 129 ds = ds.join_table(options[:join_type] || :left_outer, table, join_conditions, :table_alias=>table_alias_qualifier, :implicit_qualifier=>implicit_qualifier, :qualify=>options[:qualify], &block) 130 131 return ds if options[:join_only] 132 133 opts = ds.opts 134 135 # Whether to include the table in the result set 136 add_table = options[:select] == false ? 
false : true 137 138 if graph 139 graph = graph.dup 140 select = opts[:select].dup 141 [:column_aliases, :table_aliases, :column_alias_num].each{|k| graph[k] = graph[k].dup} 142 else 143 # Setup the initial graph data structure if it doesn't exist 144 qualifier = ds.first_source_alias 145 master = alias_symbol(qualifier) 146 raise_alias_error.call if master == table_alias 147 148 # Master hash storing all .graph related information 149 graph = {} 150 151 # Associates column aliases back to tables and columns 152 column_aliases = graph[:column_aliases] = {} 153 154 # Associates table alias (the master is never aliased) 155 table_aliases = graph[:table_aliases] = {master=>self} 156 157 # Keep track of the alias numbers used 158 ca_num = graph[:column_alias_num] = Hash.new(0) 159 160 select = if select_columns 161 select_columns.map do |sel, column| 162 column_aliases[column] = [master, column] 163 if from_selfed 164 # Initial dataset was wrapped in subselect, selected all 165 # columns in the subselect, qualified by the subselect alias. 166 Sequel.qualify(qualifier, Sequel.identifier(column)) 167 else 168 # Initial dataset not wrapped in subslect, just make 169 # sure columns are qualified in some way. 170 qualified_expression(sel, qualifier) 171 end 172 end 173 else 174 columns.map do |column| 175 column_aliases[column] = [master, column] 176 SQL::QualifiedIdentifier.new(qualifier, column) 177 end 178 end 179 end 180 181 # Add the table alias to the list of aliases 182 # Even if it isn't been used in the result set, 183 # we add a key for it with a nil value so we can check if it 184 # is used more than once 185 table_aliases = graph[:table_aliases] 186 table_aliases[table_alias] = add_table ? dataset : nil 187 188 # Add the columns to the selection unless we are ignoring them 189 if add_table 190 column_aliases = graph[:column_aliases] 191 ca_num = graph[:column_alias_num] 192 # Which columns to add to the result set 193 cols = options[:select] || dataset.columns 194 # If the column hasn't been used yet, don't alias it. 195 # If it has been used, try table_column. 196 # If that has been used, try table_column_N 197 # using the next value of N that we know hasn't been 198 # used 199 cols.each do |column| 200 col_alias, identifier = if column_aliases[column] 201 column_alias = :"#{table_alias}_#{column}" 202 if column_aliases[column_alias] 203 column_alias_num = ca_num[column_alias] 204 column_alias = :"#{column_alias}_#{column_alias_num}" 205 ca_num[column_alias] += 1 206 end 207 [column_alias, SQL::AliasedExpression.new(SQL::QualifiedIdentifier.new(table_alias_qualifier, column), column_alias)] 208 else 209 ident = SQL::QualifiedIdentifier.new(table_alias_qualifier, column) 210 [column, ident] 211 end 212 column_aliases[col_alias] = [table_alias, column].freeze 213 select.push(identifier) 214 end 215 end 216 [:column_aliases, :table_aliases, :column_alias_num].each{|k| graph[k].freeze} 217 ds = ds.clone(:graph=>graph.freeze) 218 ds.select(*select) 219 end
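A sketch of typical usage (artists and albums are hypothetical tables, each with id and name columns); columns from the graphed table that would collide are aliased, roughly:

DB[:artists].graph(:albums, artist_id: :id)
# SELECT artists.id, artists.name, albums.id AS albums_id,
#        albums.artist_id, albums.name AS albums_name
# FROM artists LEFT OUTER JOIN albums ON (albums.artist_id = artists.id)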
This allows you to manually specify the graph aliases to use when using graph. You can use it to only select certain columns, and have those columns mapped to specific aliases in the result set. This is the equivalent of select
for a graphed dataset, and must be used instead of select
whenever graphing is used.
graph_aliases should be a hash with keys being symbols of column aliases, and values being either symbols or arrays with one to three elements. If the value is a symbol, it is assumed to be the same as a one element array containing that symbol. The first element of the array should be the table alias symbol. The second should be the actual column name symbol. If the array only has a single element the column name symbol will be assumed to be the same as the corresponding hash key. If the array has a third element, it is used as the value returned, instead of table_alias.column_name.
DB[:artists].graph(:albums, artist_id: :id).
  set_graph_aliases(name: :artists,
                    album_name: [:albums, :name],
                    forty_two: [:albums, :fourtwo, 42]).first
# SELECT artists.name, albums.name AS album_name, 42 AS forty_two ...
# File lib/sequel/dataset/graph.rb 244 def set_graph_aliases(graph_aliases) 245 columns, graph_aliases = graph_alias_columns(graph_aliases) 246 if graph = opts[:graph] 247 select(*columns).clone(:graph => graph.merge(:column_aliases=>graph_aliases.freeze).freeze) 248 else 249 raise Error, "cannot call #set_graph_aliases on an ungraphed dataset" 250 end 251 end
Remove the splitting of results into subhashes, and all metadata related to the current graph (if any).
# File lib/sequel/dataset/graph.rb 255 def ungraphed 256 clone(:graph=>nil) 257 end
6 - Miscellaneous methods
Attributes
cache | [R] |
Access the cache for the current dataset. Should be used with caution, as access to the cache is not thread safe without a mutex if other threads can reference the dataset. Symbol keys prefixed with an underscore are reserved for internal use. |
db | [R] |
The database related to this dataset. This is the |
opts | [R] |
The hash of options for this dataset, keys are symbols. |
Public Class methods
Constructs a new Dataset
instance with an associated database and options. Datasets are usually constructed by invoking the Database#[]
method:
DB[:posts]
Sequel::Dataset
is an abstract class that is not useful by itself. Each database adapter provides a subclass of Sequel::Dataset
, and has the Database#dataset
method return an instance of that subclass.
# File lib/sequel/dataset/misc.rb 25 def initialize(db) 26 @db = db 27 @opts = OPTS 28 @cache = {} 29 freeze 30 end
Public Instance methods
Define a hash value such that datasets with the same class, DB, and opts will be considered equal.
# File lib/sequel/dataset/misc.rb 34 def ==(o) 35 o.is_a?(self.class) && db == o.db && opts == o.opts 36 end
An object representing the current date or time, should be an instance of Sequel.datetime_class.
# File lib/sequel/dataset/misc.rb 40 def current_datetime 41 Sequel.datetime_class.now 42 end
Return self, as datasets are always frozen.
# File lib/sequel/dataset/misc.rb 50 def dup 51 self 52 end
Yield a dataset for each server in the connection pool that is tied to that server. Intended for use in sharded environments where all servers need to be modified with the same data:
DB[:configs].where(key: 'setting').each_server{|ds| ds.update(value: 'new_value')}
# File lib/sequel/dataset/misc.rb 59 def each_server 60 db.servers.each{|s| yield server(s)} 61 end
Alias for ==
# File lib/sequel/dataset/misc.rb 45 def eql?(o) 46 self == o 47 end
Returns the string with the LIKE metacharacters (% and _) escaped. Useful for when the LIKE term is a user-provided string where metacharacters should not be recognized. Example:
ds.escape_like("foo\\%_") # 'foo\\\%\_'
# File lib/sequel/dataset/misc.rb 68 def escape_like(string) 69 string.gsub(/[\\%_]/){|m| "\\#{m}"} 70 end
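A typical use is combining it with Sequel.like so user-provided input cannot act as a wildcard (illustrative; the ESCAPE clause is only added on databases that require it):

input = "50%_off"
DB[:posts].where(Sequel.like(:title, "%#{DB[:posts].escape_like(input)}%"))
# the WHERE clause matches the literal string 50%_off instead of treating % and _ as wildcards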
Alias of first_source_alias
# File lib/sequel/dataset/misc.rb 91 def first_source 92 first_source_alias 93 end
The first source (primary table) for this dataset. If the dataset doesn’t have a table, raises an Error
. If the table is aliased, returns the aliased name.
DB[:table].first_source_alias # => :table DB[Sequel[:table].as(:t)].first_source_alias # => :t
# File lib/sequel/dataset/misc.rb 103 def first_source_alias 104 source = @opts[:from] 105 if source.nil? || source.empty? 106 raise Error, 'No source specified for query' 107 end 108 case s = source.first 109 when SQL::AliasedExpression 110 s.alias 111 when Symbol 112 _, _, aliaz = split_symbol(s) 113 aliaz ? aliaz.to_sym : s 114 else 115 s 116 end 117 end
The first source (primary table) for this dataset. If the dataset doesn’t have a table, raises an Error. If the table is aliased, returns the original table, not the alias.
DB[:table].first_source_table # => :table DB[Sequel[:table].as(:t)].first_source_table # => :table
# File lib/sequel/dataset/misc.rb 128 def first_source_table 129 source = @opts[:from] 130 if source.nil? || source.empty? 131 raise Error, 'No source specified for query' 132 end 133 case s = source.first 134 when SQL::AliasedExpression 135 s.expression 136 when Symbol 137 sch, table, aliaz = split_symbol(s) 138 aliaz ? (sch ? SQL::QualifiedIdentifier.new(sch, table) : table.to_sym) : s 139 else 140 s 141 end 142 end
Freeze the opts when freezing the dataset.
# File lib/sequel/dataset/misc.rb 74 def freeze 75 @opts.freeze 76 super 77 end
Define a hash value such that datasets with the same class, DB, and opts will have the same hash value.
# File lib/sequel/dataset/misc.rb 146 def hash 147 [self.class, db, opts].hash 148 end
Returns a string representation of the dataset including the class name and the corresponding SQL
select statement.
# File lib/sequel/dataset/misc.rb 152 def inspect 153 "#<#{visible_class_name}: #{sql.inspect}>" 154 end
Whether this dataset is a joined dataset (multiple FROM tables or any JOINs).
# File lib/sequel/dataset/misc.rb 157 def joined_dataset? 158 !!((opts[:from].is_a?(Array) && opts[:from].size > 1) || opts[:join]) 159 end
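For example:

DB[:posts].joined_dataset?                               # => false
DB[:posts, :comments].joined_dataset?                    # => true
DB[:posts].join(:comments, post_id: :id).joined_dataset? # => true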
The class to use for placeholder literalizers for the current dataset.
# File lib/sequel/dataset/misc.rb 162 def placeholder_literalizer_class 163 ::Sequel::Dataset::PlaceholderLiteralizer 164 end
A placeholder literalizer loader for the current dataset.
# File lib/sequel/dataset/misc.rb 167 def placeholder_literalizer_loader(&block) 168 placeholder_literalizer_class.loader(self, &block) 169 end
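A sketch of how a loader might be used (based on the PlaceholderLiteralizer API; pl.arg marks where each runtime argument is literalized):

loader = DB[:posts].placeholder_literalizer_loader do |pl, ds|
  ds.where(id: pl.arg)
end
loader.first(1)  # reuses the precomputed SQL fragments, literalizing only the argument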
The alias to use for the row_number column, used when emulating OFFSET support and for eager limit strategies.
# File lib/sequel/dataset/misc.rb 173 def row_number_column 174 :x_sequel_row_number_x 175 end
The row_proc
for this database, should be any object that responds to call
with a single hash argument and returns the object you want each
to return.
# File lib/sequel/dataset/misc.rb 179 def row_proc 180 @opts[:row_proc] 181 end
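For example, with_row_proc can be used to wrap each returned row (Post here is a hypothetical class):

ds = DB[:posts].with_row_proc(lambda{|h| Post.new(h)})
ds.row_proc  # => the lambda above
ds.all       # => array of Post instances instead of plain hashes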
Splits a possible implicit alias in c
, handling both SQL::AliasedExpressions and Symbols. Returns an array of two elements, with the first being the main expression, and the second being the alias.
# File lib/sequel/dataset/misc.rb 186 def split_alias(c) 187 case c 188 when Symbol 189 c_table, column, aliaz = split_symbol(c) 190 [c_table ? SQL::QualifiedIdentifier.new(c_table, column.to_sym) : column.to_sym, aliaz] 191 when SQL::AliasedExpression 192 [c.expression, c.alias] 193 when SQL::JoinClause 194 [c.table, c.table_alias] 195 else 196 [c, nil] 197 end 198 end
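For example:

ds = DB[:posts]
ds.split_alias(Sequel.as(:posts, :p))  # => [:posts, :p]
ds.split_alias(:posts)                 # => [:posts, nil]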
This returns an SQL::Identifier
or SQL::AliasedExpression
containing an SQL
identifier that represents the unqualified column for the given value. The given value should be a Symbol, SQL::Identifier
, SQL::QualifiedIdentifier
, or SQL::AliasedExpression
containing one of those. In other cases, this returns nil.
# File lib/sequel/dataset/misc.rb 205 def unqualified_column_for(v) 206 unless v.is_a?(String) 207 _unqualified_column_for(v) 208 end 209 end
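For example (return values shown approximately):

ds = DB[:posts]
ds.unqualified_column_for(Sequel[:posts][:title])         # => SQL::Identifier for title
ds.unqualified_column_for(Sequel[:posts][:title].as(:t))  # => SQL::AliasedExpression for title AS t
ds.unqualified_column_for('title')                        # => nil (plain strings are not treated as identifiers)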
Creates a unique table alias that hasn’t already been used in the dataset. table_alias can be any type of object accepted by alias_symbol. The symbol returned will be the implicit alias in the argument, possibly appended with “_N” if the implicit alias has already been used, where N is an integer starting at 0 and increasing until an unused one is found.
You can provide an optional second array argument containing symbols that should not be considered valid table aliases. The current aliases for the FROM and JOIN tables are automatically included in this array.
DB[:table].unused_table_alias(:t) # => :t DB[:table].unused_table_alias(:table) # => :table_0 DB[:table, :table_0].unused_table_alias(:table) # => :table_1 DB[:table, :table_0].unused_table_alias(:table, [:table_1, :table_2]) # => :table_3
# File lib/sequel/dataset/misc.rb 233 def unused_table_alias(table_alias, used_aliases = []) 234 table_alias = alias_symbol(table_alias) 235 used_aliases += opts[:from].map{|t| alias_symbol(t)} if opts[:from] 236 used_aliases += opts[:join].map{|j| j.table_alias ? alias_alias_symbol(j.table_alias) : alias_symbol(j.table)} if opts[:join] 237 if used_aliases.include?(table_alias) 238 i = 0 239 while true 240 ta = :"#{table_alias}_#{i}" 241 return ta unless used_aliases.include?(ta) 242 i += 1 243 end 244 else 245 table_alias 246 end 247 end
Return a modified dataset with quote_identifiers set.
# File lib/sequel/dataset/misc.rb 250 def with_quote_identifiers(v) 251 clone(:quote_identifiers=>v, :skip_symbol_cache=>true) 252 end
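For example (the quoting character depends on the adapter; the default is the SQL standard double quote):

DB[:posts].with_quote_identifiers(true).sql   # => SELECT * FROM "posts"
DB[:posts].with_quote_identifiers(false).sql  # => SELECT * FROM posts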
Protected Instance methods
The cached columns for the current dataset.
# File lib/sequel/dataset/misc.rb 281 def _columns 282 cache_get(:_columns) 283 end
Retrieve a value from the dataset’s cache in a thread safe manner.
# File lib/sequel/dataset/misc.rb 263 def cache_get(k) 264 Sequel.synchronize{@cache[k]} 265 end
Set a value in the dataset’s cache in a thread safe manner.
# File lib/sequel/dataset/misc.rb 268 def cache_set(k, v) 269 Sequel.synchronize{@cache[k] = v} 270 end
Clear the columns hash for the current dataset. This is not a thread safe operation, so it should only be used if the dataset could not be used by another thread (such as one that was just created via clone).
# File lib/sequel/dataset/misc.rb 276 def clear_columns_cache 277 @cache.delete(:_columns) 278 end
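A sketch of how an extension module might use cache_get and cache_set to memoize a value per dataset (the module and cache key below are hypothetical; underscore-prefixed keys are reserved for internal use):

module DatasetColumnCount
  def column_count
    cache_get(:column_count) || cache_set(:column_count, columns.length)
  end
end
ds = DB[:posts].with_extend(DatasetColumnCount)
ds.column_count  # computed once, then served from the dataset's cache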
8 - Methods related to prepared statements or bound variables
Constants
DEFAULT_PREPARED_STATEMENT_MODULE_METHODS = %w'execute execute_dui execute_insert'.freeze.each(&:freeze)
PREPARED_ARG_PLACEHOLDER = LiteralString.new('?').freeze
PREPARED_STATEMENT_MODULE_CODE = {
  :bind => "opts = Hash[opts]; opts[:arguments] = bind_arguments".freeze,
  :prepare => "sql = prepared_statement_name".freeze,
  :prepare_bind => "sql = prepared_statement_name; opts = Hash[opts]; opts[:arguments] = bind_arguments".freeze
}.freeze
Public Instance methods
Set the bind variables to use for the call. If bind variables have already been set for this dataset, they are updated with the contents of bind_vars.
DB[:table].where(id: :$id).bind(id: 1).call(:first) # SELECT * FROM table WHERE id = ? LIMIT 1 -- (1) # => {:id=>1}
# File lib/sequel/dataset/prepared_statements.rb 332 def bind(bind_vars=OPTS) 333 bind_vars = if bv = @opts[:bind_vars] 334 bv.merge(bind_vars).freeze 335 else 336 if bind_vars.frozen? 337 bind_vars 338 else 339 Hash[bind_vars] 340 end 341 end 342 343 clone(:bind_vars=>bind_vars) 344 end
For the given type (:select, :first, :insert, :insert_select, :update, :delete, or :single_value), run the sql with the bind variables specified in the hash. values
is a hash passed to insert or update (if one of those types is used), which may contain placeholders.
DB[:table].where(id: :$id).call(:first, id: 1) # SELECT * FROM table WHERE id = ? LIMIT 1 -- (1) # => {:id=>1}
# File lib/sequel/dataset/prepared_statements.rb 353 def call(type, bind_variables=OPTS, *values, &block) 354 to_prepared_statement(type, values, :extend=>bound_variable_modules).call(bind_variables, &block) 355 end
Prepare an SQL
statement for later execution. Takes a type similar to call
, and the name
symbol of the prepared statement.
This returns a clone of the dataset extended with PreparedStatementMethods
, which you can call
with the hash of bind variables to use. The prepared statement is also stored in the associated Database
, where it can be called by name. The following usage is identical:
ps = DB[:table].where(name: :$name).prepare(:first, :select_by_name) ps.call(name: 'Blah') # SELECT * FROM table WHERE name = ? -- ('Blah') # => {:id=>1, :name=>'Blah'} DB.call(:select_by_name, name: 'Blah') # Same thing
# File lib/sequel/dataset/prepared_statements.rb 373 def prepare(type, name, *values) 374 ps = to_prepared_statement(type, values, :name=>name, :extend=>prepared_statement_modules, :no_delayed_evaluations=>true) 375 376 ps = if ps.send(:emulate_prepared_statements?) 377 ps = ps.with_extend(EmulatePreparedStatementMethods) 378 ps.send(:emulated_prepared_statement, type, name, values) 379 else 380 sql = ps.prepared_sql 381 ps.prepared_args.freeze 382 ps.clone(:prepared_sql=>sql, :sql=>sql) 383 end 384 385 db.set_prepared_statement(name, ps) 386 ps 387 end
Protected Instance methods
Return a cloned copy of the current dataset extended with PreparedStatementMethods
, setting the prepared statement type and the values to use for insert/update modification.
# File lib/sequel/dataset/prepared_statements.rb 393 def to_prepared_statement(type, values=nil, opts=OPTS) 394 mods = opts[:extend] || [] 395 mods += [PreparedStatementMethods] 396 397 bind. 398 clone(:prepared_statement_name=>opts[:name], :prepared_type=>type, :prepared_modify_values=>values, :orig_dataset=>self, :no_cache_sql=>true, :prepared_args=>@opts[:prepared_args]||[], :no_delayed_evaluations=>opts[:no_delayed_evaluations]). 399 with_extend(*mods) 400 end
9 - Internal Methods relating to SQL Creation
Constants
BITWISE_METHOD_MAP = {:& => :BITAND, :| => :BITOR, :^ => :BITXOR}.freeze
COUNT_FROM_SELF_OPTS = [:distinct, :group, :sql, :limit, :offset, :compounds].freeze
COUNT_OF_ALL_AS_COUNT = SQL::Function.new(:count, WILDCARD).as(:count)
DEFAULT = LiteralString.new('DEFAULT').freeze
EXISTS = ['EXISTS '.freeze].freeze
IS_LITERALS = {nil=>'NULL'.freeze, true=>'TRUE'.freeze, false=>'FALSE'.freeze}.freeze
IS_OPERATORS = ::Sequel::SQL::ComplexExpression::IS_OPERATORS
LIKE_OPERATORS = ::Sequel::SQL::ComplexExpression::LIKE_OPERATORS
N_ARITY_OPERATORS = ::Sequel::SQL::ComplexExpression::N_ARITY_OPERATORS
QUALIFY_KEYS = [:select, :where, :having, :order, :group].freeze
REGEXP_OPERATORS = ::Sequel::SQL::ComplexExpression::REGEXP_OPERATORS
TWO_ARITY_OPERATORS = ::Sequel::SQL::ComplexExpression::TWO_ARITY_OPERATORS
WILDCARD = LiteralString.new('*').freeze
Public Class methods
Given a type (e.g. select) and an array of clauses, return an array of methods to call to build the SQL
string.
# File lib/sequel/dataset/sql.rb 198 def self.clause_methods(type, clauses) 199 clauses.map{|clause| :"#{type}_#{clause}_sql"}.freeze 200 end
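For example:

Sequel::Dataset.clause_methods(:select, %w'select columns from where')
# => [:select_select_sql, :select_columns_sql, :select_from_sql, :select_where_sql]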
Define a dataset literalization method for the given type in the given module, using the given clauses.
Arguments:
mod :: Module in which to define the method
type :: Type of SQL statement (e.g. :select, :insert, :update, :delete)
clauses :: Array of clauses that make up the SQL statement for the given type
# File lib/sequel/dataset/sql.rb 212 def self.def_sql_method(mod, type, clauses) 213 priv = type == :update || type == :insert 214 cacheable = type == :select || type == :delete 215 216 lines = [] 217 lines << 'private' if priv 218 lines << "def #{'_' if priv}#{type}_sql" 219 lines << 'if sql = opts[:sql]; return static_sql(sql) end' unless priv 220 lines << "if sql = cache_get(:_#{type}_sql); return sql end" if cacheable 221 lines << 'check_delete_allowed!' << 'check_not_limited!(:delete)' if type == :delete 222 lines << 'sql = @opts[:append_sql] || sql_string_origin' 223 224 if clauses.all?{|c| c.is_a?(Array)} 225 clauses.each do |i, cs| 226 lines << i 227 lines.concat(clause_methods(type, cs).map{|x| "#{x}(sql)"}) 228 end 229 lines << 'end' 230 else 231 lines.concat(clause_methods(type, clauses).map{|x| "#{x}(sql)"}) 232 end 233 234 lines << "cache_set(:_#{type}_sql, sql) if cache_sql?" if cacheable 235 lines << 'sql' 236 lines << 'end' 237 238 mod.class_eval lines.join("\n"), __FILE__, __LINE__ 239 end
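A sketch of how an adapter might use this (the module name and clause list below are illustrative, not the actual defaults):

Sequel::Dataset.def_sql_method(MyAdapterDatasetMethods, :select, %w'select distinct columns from join where group having order limit')
# defines MyAdapterDatasetMethods#select_sql, which calls select_select_sql, select_distinct_sql, ... in order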
Public Instance methods
Append literalization of aliased expression to SQL
string.
# File lib/sequel/dataset/sql.rb 273 def aliased_expression_sql_append(sql, ae) 274 literal_append(sql, ae.expression) 275 as_sql_append(sql, ae.alias, ae.columns) 276 end
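These *_sql_append methods are normally invoked indirectly via literal_append, which dispatches on the expression type. For example:

sql = String.new
DB[:posts].literal_append(sql, Sequel[:posts][:title].as(:t))
sql # => "posts.title AS t"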
Append literalization of array to SQL
string.
# File lib/sequel/dataset/sql.rb 279 def array_sql_append(sql, a) 280 if a.empty? 281 sql << '(NULL)' 282 else 283 sql << '(' 284 expression_list_append(sql, a) 285 sql << ')' 286 end 287 end
Append literalization of boolean constant to SQL
string.
# File lib/sequel/dataset/sql.rb 290 def boolean_constant_sql_append(sql, constant) 291 if (constant == true || constant == false) && !supports_where_true? 292 sql << (constant == true ? '(1 = 1)' : '(1 = 0)') 293 else 294 literal_append(sql, constant) 295 end 296 end
Append literalization of case expression to SQL
string.
# File lib/sequel/dataset/sql.rb 299 def case_expression_sql_append(sql, ce) 300 sql << '(CASE' 301 if ce.expression? 302 sql << ' ' 303 literal_append(sql, ce.expression) 304 end 305 w = " WHEN " 306 t = " THEN " 307 ce.conditions.each do |c,r| 308 sql << w 309 literal_append(sql, c) 310 sql << t 311 literal_append(sql, r) 312 end 313 sql << " ELSE " 314 literal_append(sql, ce.default) 315 sql << " END)" 316 end
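For example (via literal_append, which dispatches here for SQL::CaseExpression values):

sql = String.new
DB[:posts].literal_append(sql, Sequel.case({{category: 'news'}=>1}, 0))
sql # => "(CASE WHEN (category = 'news') THEN 1 ELSE 0 END)"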
Append literalization of cast expression to SQL
string.
# File lib/sequel/dataset/sql.rb 319 def cast_sql_append(sql, expr, type) 320 sql << 'CAST(' 321 literal_append(sql, expr) 322 sql << ' AS ' << db.cast_type_literal(type).to_s 323 sql << ')' 324 end
Append literalization of column all selection to SQL
string.
# File lib/sequel/dataset/sql.rb 327 def column_all_sql_append(sql, ca) 328 qualified_identifier_sql_append(sql, ca.table, WILDCARD) 329 end
Append literalization of complex expression to SQL
string.
# File lib/sequel/dataset/sql.rb 332 def complex_expression_sql_append(sql, op, args) 333 case op 334 when *IS_OPERATORS 335 r = args[1] 336 if r.nil? || supports_is_true? 337 raise(InvalidOperation, 'Invalid argument used for IS operator') unless val = IS_LITERALS[r] 338 sql << '(' 339 literal_append(sql, args[0]) 340 sql << ' ' << op.to_s << ' ' 341 sql << val << ')' 342 elsif op == :IS 343 complex_expression_sql_append(sql, :"=", args) 344 else 345 complex_expression_sql_append(sql, :OR, [SQL::BooleanExpression.new(:"!=", *args), SQL::BooleanExpression.new(:IS, args[0], nil)]) 346 end 347 when :IN, :"NOT IN" 348 cols = args[0] 349 vals = args[1] 350 col_array = true if cols.is_a?(Array) 351 if vals.is_a?(Array) 352 val_array = true 353 empty_val_array = vals == [] 354 end 355 if empty_val_array 356 literal_append(sql, empty_array_value(op, cols)) 357 elsif col_array 358 if !supports_multiple_column_in? 359 if val_array 360 expr = SQL::BooleanExpression.new(:OR, *vals.to_a.map{|vs| SQL::BooleanExpression.from_value_pairs(cols.to_a.zip(vs).map{|c, v| [c, v]})}) 361 literal_append(sql, op == :IN ? expr : ~expr) 362 else 363 old_vals = vals 364 vals = vals.naked if vals.is_a?(Sequel::Dataset) 365 vals = vals.to_a 366 val_cols = old_vals.columns 367 complex_expression_sql_append(sql, op, [cols, vals.map!{|x| x.values_at(*val_cols)}]) 368 end 369 else 370 # If the columns and values are both arrays, use array_sql instead of 371 # literal so that if values is an array of two element arrays, it 372 # will be treated as a value list instead of a condition specifier. 373 sql << '(' 374 literal_append(sql, cols) 375 sql << ' ' << op.to_s << ' ' 376 if val_array 377 array_sql_append(sql, vals) 378 else 379 literal_append(sql, vals) 380 end 381 sql << ')' 382 end 383 else 384 sql << '(' 385 literal_append(sql, cols) 386 sql << ' ' << op.to_s << ' ' 387 literal_append(sql, vals) 388 sql << ')' 389 end 390 when :LIKE, :'NOT LIKE' 391 sql << '(' 392 literal_append(sql, args[0]) 393 sql << ' ' << op.to_s << ' ' 394 literal_append(sql, args[1]) 395 if requires_like_escape? 396 sql << " ESCAPE " 397 literal_append(sql, "\\") 398 end 399 sql << ')' 400 when :ILIKE, :'NOT ILIKE' 401 complex_expression_sql_append(sql, (op == :ILIKE ? :LIKE : :"NOT LIKE"), args.map{|v| Sequel.function(:UPPER, v)}) 402 when :** 403 function_sql_append(sql, Sequel.function(:power, *args)) 404 when *TWO_ARITY_OPERATORS 405 if REGEXP_OPERATORS.include?(op) && !supports_regexp? 406 raise InvalidOperation, "Pattern matching via regular expressions is not supported on #{db.database_type}" 407 end 408 sql << '(' 409 literal_append(sql, args[0]) 410 sql << ' ' << op.to_s << ' ' 411 literal_append(sql, args[1]) 412 sql << ')' 413 when *N_ARITY_OPERATORS 414 sql << '(' 415 c = false 416 op_str = " #{op} " 417 args.each do |a| 418 sql << op_str if c 419 literal_append(sql, a) 420 c ||= true 421 end 422 sql << ')' 423 when :NOT 424 sql << 'NOT ' 425 literal_append(sql, args[0]) 426 when :NOOP 427 literal_append(sql, args[0]) 428 when :'B~' 429 sql << '~' 430 literal_append(sql, args[0]) 431 when :extract 432 sql << 'extract(' << args[0].to_s << ' FROM ' 433 literal_append(sql, args[1]) 434 sql << ')' 435 else 436 raise(InvalidOperation, "invalid operator #{op}") 437 end 438 end
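For example, a multiple-column IN condition (emulated with OR on databases that lack support for it):

DB[:posts].where([:user_id, :id] => [[1, 2], [3, 4]]).sql
# => SELECT * FROM posts WHERE ((user_id, id) IN ((1, 2), (3, 4)))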
Append literalization of constant to SQL
string.
# File lib/sequel/dataset/sql.rb 441 def constant_sql_append(sql, constant) 442 sql << constant.to_s 443 end
Append literalization of delayed evaluation to SQL
string, causing the delayed evaluation proc to be evaluated.
# File lib/sequel/dataset/sql.rb 447 def delayed_evaluation_sql_append(sql, delay) 448 # Delayed evaluations are used specifically so the SQL 449 # can differ in subsequent calls, so we definitely don't 450 # want to cache the sql in this case. 451 disable_sql_caching! 452 453 if recorder = @opts[:placeholder_literalizer] 454 recorder.use(sql, lambda{delay.call(self)}, nil) 455 else 456 literal_append(sql, delay.call(self)) 457 end 458 end
Append literalization of function call to SQL
string.
# File lib/sequel/dataset/sql.rb 461 def function_sql_append(sql, f) 462 name = f.name 463 opts = f.opts 464 465 if opts[:emulate] 466 if emulate_function?(name) 467 emulate_function_sql_append(sql, f) 468 return 469 end 470 471 name = native_function_name(name) 472 end 473 474 sql << 'LATERAL ' if opts[:lateral] 475 476 case name 477 when SQL::Identifier 478 if supports_quoted_function_names? && opts[:quoted] 479 literal_append(sql, name) 480 else 481 sql << name.value.to_s 482 end 483 when SQL::QualifiedIdentifier 484 if supports_quoted_function_names? && opts[:quoted] != false 485 literal_append(sql, name) 486 else 487 sql << split_qualifiers(name).join('.') 488 end 489 else 490 if supports_quoted_function_names? && opts[:quoted] 491 quote_identifier_append(sql, name) 492 else 493 sql << name.to_s 494 end 495 end 496 497 sql << '(' 498 if filter = opts[:filter] 499 filter = filter_expr(filter, &opts[:filter_block]) 500 end 501 if opts[:*] 502 if filter && !supports_filtered_aggregates? 503 literal_append(sql, Sequel.case({filter=>1}, nil)) 504 filter = nil 505 else 506 sql << '*' 507 end 508 else 509 sql << "DISTINCT " if opts[:distinct] 510 if filter && !supports_filtered_aggregates? 511 expression_list_append(sql, f.args.map{|arg| Sequel.case({filter=>arg}, nil)}) 512 filter = nil 513 else 514 expression_list_append(sql, f.args) 515 end 516 if order = opts[:order] 517 sql << " ORDER BY " 518 expression_list_append(sql, order) 519 end 520 end 521 sql << ')' 522 523 if group = opts[:within_group] 524 sql << " WITHIN GROUP (ORDER BY " 525 expression_list_append(sql, group) 526 sql << ')' 527 end 528 529 if filter 530 sql << " FILTER (WHERE " 531 literal_append(sql, filter) 532 sql << ')' 533 end 534 535 if window = opts[:over] 536 sql << ' OVER ' 537 window_sql_append(sql, window.opts) 538 end 539 540 if opts[:with_ordinality] 541 sql << " WITH ORDINALITY" 542 end 543 end
Append literalization of JOIN clause without ON or USING to SQL
string.
# File lib/sequel/dataset/sql.rb 546 def join_clause_sql_append(sql, jc) 547 table = jc.table 548 table_alias = jc.table_alias 549 table_alias = nil if table == table_alias && !jc.column_aliases 550 sql << ' ' << join_type_sql(jc.join_type) << ' ' 551 identifier_append(sql, table) 552 as_sql_append(sql, table_alias, jc.column_aliases) if table_alias 553 end
Append literalization of JOIN ON clause to SQL
string.
# File lib/sequel/dataset/sql.rb 556 def join_on_clause_sql_append(sql, jc) 557 join_clause_sql_append(sql, jc) 558 sql << ' ON ' 559 literal_append(sql, filter_expr(jc.on)) 560 end
Append literalization of JOIN USING clause to SQL
string.
# File lib/sequel/dataset/sql.rb 563 def join_using_clause_sql_append(sql, jc) 564 join_clause_sql_append(sql, jc) 565 join_using_clause_using_sql_append(sql, jc.using) 566 end
Append literalization of negative boolean constant to SQL
string.
# File lib/sequel/dataset/sql.rb 569 def negative_boolean_constant_sql_append(sql, constant) 570 sql << 'NOT ' 571 boolean_constant_sql_append(sql, constant) 572 end
Append literalization of ordered expression to SQL
string.
# File lib/sequel/dataset/sql.rb 575 def ordered_expression_sql_append(sql, oe) 576 if emulate = requires_emulating_nulls_first? 577 case oe.nulls 578 when :first 579 null_order = 0 580 when :last 581 null_order = 2 582 end 583 584 if null_order 585 literal_append(sql, Sequel.case({{oe.expression=>nil}=>null_order}, 1)) 586 sql << ", " 587 end 588 end 589 590 literal_append(sql, oe.expression) 591 sql << (oe.descending ? ' DESC' : ' ASC') 592 593 unless emulate 594 case oe.nulls 595 when :first 596 sql << " NULLS FIRST" 597 when :last 598 sql << " NULLS LAST" 599 end 600 end 601 end
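For example (databases that cannot express NULLS LAST directly get the CASE-based emulation described above):

DB[:posts].order(Sequel.desc(:title, nulls: :last)).sql
# => SELECT * FROM posts ORDER BY title DESC NULLS LAST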
Append literalization of placeholder literal string to SQL
string.
# File lib/sequel/dataset/sql.rb 604 def placeholder_literal_string_sql_append(sql, pls) 605 args = pls.args 606 str = pls.str 607 sql << '(' if pls.parens 608 if args.is_a?(Hash) 609 if args.empty? 610 sql << str 611 else 612 re = /:(#{args.keys.map{|k| Regexp.escape(k.to_s)}.join('|')})\b/ 613 while true 614 previous, q, str = str.partition(re) 615 sql << previous 616 literal_append(sql, args[($1||q[1..-1].to_s).to_sym]) unless q.empty? 617 break if str.empty? 618 end 619 end 620 elsif str.is_a?(Array) 621 len = args.length 622 str.each_with_index do |s, i| 623 sql << s 624 literal_append(sql, args[i]) unless i == len 625 end 626 unless str.length == args.length || str.length == args.length + 1 627 raise Error, "Mismatched number of placeholders (#{str.length}) and placeholder arguments (#{args.length}) when using placeholder array" 628 end 629 else 630 i = -1 631 match_len = args.length - 1 632 while true 633 previous, q, str = str.partition('?') 634 sql << previous 635 literal_append(sql, args.at(i+=1)) unless q.empty? 636 if str.empty? 637 unless i == match_len 638 raise Error, "Mismatched number of placeholders (#{i+1}) and placeholder arguments (#{args.length}) when using placeholder string" 639 end 640 break 641 end 642 end 643 end 644 sql << ')' if pls.parens 645 end
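For example (output shown approximately; parenthesization may differ):

DB[:posts].where(Sequel.lit('num_comments > ?', 7)).sql
# => SELECT * FROM posts WHERE (num_comments > 7)
DB[:posts].where(Sequel.lit('num_comments > :n', n: 7)).sql  # named placeholder form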
Append literalization of qualified identifier to SQL
string. If 3 arguments are given, the second should be the table/qualifier and the third should be the column/qualified identifier. If 2 arguments are given, the second should be an SQL::QualifiedIdentifier
.
# File lib/sequel/dataset/sql.rb 650 def qualified_identifier_sql_append(sql, table, column=(c = table.column; table = table.table; c)) 651 identifier_append(sql, table) 652 sql << '.' 653 identifier_append(sql, column) 654 end
Append literalization of unqualified identifier to SQL
string. Adds quoting to identifiers (columns and tables). If identifiers are not being quoted, appends the name as a string. If identifiers are being quoted, quotes the name with quoted_identifier_append.
# File lib/sequel/dataset/sql.rb 660 def quote_identifier_append(sql, name) 661 if name.is_a?(LiteralString) 662 sql << name 663 else 664 name = name.value if name.is_a?(SQL::Identifier) 665 name = input_identifier(name) 666 if quote_identifiers? 667 quoted_identifier_append(sql, name) 668 else 669 sql << name 670 end 671 end 672 end
Append literalization of identifier or unqualified identifier to SQL
string.
# File lib/sequel/dataset/sql.rb 675 def quote_schema_table_append(sql, table) 676 schema, table = schema_and_table(table) 677 if schema 678 quote_identifier_append(sql, schema) 679 sql << '.' 680 end 681 quote_identifier_append(sql, table) 682 end
Append literalization of quoted identifier to SQL
string. This method quotes the given name with the SQL
standard double quote. This should be overridden by subclasses to provide quoting that does not match the SQL
standard, such as backtick (used by MySQL and SQLite).
# File lib/sequel/dataset/sql.rb 688 def quoted_identifier_append(sql, name) 689 sql << '"' << name.to_s.gsub('"', '""') << '"' 690 end
Split the schema information from the table, returning two strings, one for the schema and one for the table. The returned schema may be nil, but the table will always have a string value.
Note that this function does not handle tables with more than one level of qualification (e.g. database.schema.table on Microsoft SQL
Server).
# File lib/sequel/dataset/sql.rb 699 def schema_and_table(table_name, sch=nil) 700 sch = sch.to_s if sch 701 case table_name 702 when Symbol 703 s, t, _ = split_symbol(table_name) 704 [s||sch, t] 705 when SQL::QualifiedIdentifier 706 [table_name.table.to_s, table_name.column.to_s] 707 when SQL::Identifier 708 [sch, table_name.value.to_s] 709 when String 710 [sch, table_name] 711 else 712 raise Error, 'table_name should be a Symbol, SQL::QualifiedIdentifier, SQL::Identifier, or String' 713 end 714 end
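For example:

ds = DB[:posts]
ds.schema_and_table(Sequel[:sch][:tab]) # => ['sch', 'tab']
ds.schema_and_table(:tab)               # => [nil, 'tab']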
Splits table_name into an array of strings.
ds.split_qualifiers(:s) # ['s'] ds.split_qualifiers(Sequel[:t][:s]) # ['t', 's'] ds.split_qualifiers(Sequel[:d][:t][:s]) # ['d', 't', 's'] ds.split_qualifiers(Sequel.qualify(Sequel[:h][:d], Sequel[:t][:s])) # ['h', 'd', 't', 's']
# File lib/sequel/dataset/sql.rb 722 def split_qualifiers(table_name, *args) 723 case table_name 724 when SQL::QualifiedIdentifier 725 split_qualifiers(table_name.table, nil) + split_qualifiers(table_name.column, nil) 726 else 727 sch, table = schema_and_table(table_name, *args) 728 sch ? [sch, table] : [table] 729 end 730 end
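Append literalization of subscript expression (SQL array access) to SQL string.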
# File lib/sequel/dataset/sql.rb 733 def subscript_sql_append(sql, s) 734 case s.expression 735 when Symbol, SQL::Subscript, SQL::Identifier, SQL::QualifiedIdentifier 736 # nothing 737 else 738 wrap_expression = true 739 sql << '(' 740 end 741 literal_append(sql, s.expression) 742 if wrap_expression 743 sql << ')[' 744 else 745 sql << '[' 746 end 747 sub = s.sub 748 if sub.length == 1 && (range = sub.first).is_a?(Range) 749 literal_append(sql, range.begin) 750 sql << ':' 751 e = range.end 752 e -= 1 if range.exclude_end? && e.is_a?(Integer) 753 literal_append(sql, e) 754 else 755 expression_list_append(sql, s.sub) 756 end 757 sql << ']' 758 end
Append literalization of windows (for window functions) to SQL
string.
# File lib/sequel/dataset/sql.rb 761 def window_sql_append(sql, opts) 762 raise(Error, 'This dataset does not support window functions') unless supports_window_functions? 763 space = false 764 space_s = ' ' 765 766 sql << '(' 767 768 if window = opts[:window] 769 literal_append(sql, window) 770 space = true 771 end 772 773 if part = opts[:partition] 774 sql << space_s if space 775 sql << "PARTITION BY " 776 expression_list_append(sql, Array(part)) 777 space = true 778 end 779 780 if order = opts[:order] 781 sql << space_s if space 782 sql << "ORDER BY " 783 expression_list_append(sql, Array(order)) 784 space = true 785 end 786 787 if frame = opts[:frame] 788 sql << space_s if space 789 790 if frame.is_a?(String) 791 sql << frame 792 else 793 case frame 794 when :all 795 frame_type = :rows 796 frame_start = :preceding 797 frame_end = :following 798 when :rows, :range, :groups 799 frame_type = frame 800 frame_start = :preceding 801 frame_end = :current 802 when Hash 803 frame_type = frame[:type] 804 unless frame_type == :rows || frame_type == :range || frame_type == :groups 805 raise Error, "invalid window :frame :type option: #{frame_type.inspect}" 806 end 807 unless frame_start = frame[:start] 808 raise Error, "invalid window :frame :start option: #{frame_start.inspect}" 809 end 810 frame_end = frame[:end] 811 frame_exclude = frame[:exclude] 812 else 813 raise Error, "invalid window :frame option: #{frame.inspect}" 814 end 815 816 sql << frame_type.to_s.upcase << " " 817 sql << 'BETWEEN ' if frame_end 818 window_frame_boundary_sql_append(sql, frame_start, :preceding) 819 if frame_end 820 sql << " AND " 821 window_frame_boundary_sql_append(sql, frame_end, :following) 822 end 823 824 if frame_exclude 825 sql << " EXCLUDE " 826 827 case frame_exclude 828 when :current 829 sql << "CURRENT ROW" 830 when :group 831 sql << "GROUP" 832 when :ties 833 sql << "TIES" 834 when :no_others 835 sql << "NO OTHERS" 836 else 837 raise Error, "invalid window :frame :exclude option: #{frame_exclude.inspect}" 838 end 839 end 840 end 841 end 842 843 sql << ')' 844 end
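For example (assuming the database supports window functions; exact SQL may vary by adapter):

DB[:items].select{sum(:amount).over(partition: :dept, order: :id, frame: :rows)}.sql
# => SELECT sum(amount) OVER (PARTITION BY dept ORDER BY id ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) FROM items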
Protected Instance methods
Return a from_self
dataset if an order or limit is specified, so it works as expected with UNION, EXCEPT, and INTERSECT clauses.
# File lib/sequel/dataset/sql.rb 850 def compound_from_self 851 (@opts[:sql] || @opts[:limit] || @opts[:order] || @opts[:offset]) ? from_self : self 852 end