A dataset represents an SQL query. Datasets can be used to select, insert, update and delete records.
Query results are always retrieved on demand, so a dataset can be kept around and reused indefinitely (datasets never cache results):
my_posts = DB[:posts].where(author: 'david') # no records are retrieved
my_posts.all # records are retrieved
my_posts.all # records are retrieved again
Datasets are frozen and use a functional style where modification methods return modified copies of the dataset. This allows you to reuse datasets:
posts = DB[:posts]
davids_posts = posts.where(author: 'david')
old_posts = posts.where{stamp < Date.today - 7}
davids_old_posts = davids_posts.where{stamp < Date.today - 7}
Datasets are Enumerable objects, so they can be manipulated using many of the Enumerable methods, such as map and inject. Note that there are some methods that Dataset defines that override methods defined in Enumerable and result in different behavior, such as select and group_by.
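For instance, a minimal sketch (assuming a posts table with a title column) contrasting an Enumerable method with the overridden select:

DB[:posts].map{|row| row[:title]} # Enumerable-style: runs the query and yields each row hash
DB[:posts].select(:title) # Dataset#select: returns a modified dataset, no query is run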
For more information, see the “Dataset Basics” guide.
Methods
Public Class
Public Instance
- <<
- ==
- []
- add_graph_aliases
- aliased_expression_sql_append
- all
- array_sql_append
- as_hash
- as_set
- avg
- bind
- boolean_constant_sql_append
- call
- case_expression_sql_append
- cast_sql_append
- clone
- column_all_sql_append
- columns
- columns!
- complex_expression_sql_append
- constant_sql_append
- count
- current_datetime
- db
- delayed_evaluation_sql_append
- delete
- distinct
- dup
- each
- each_server
- empty?
- eql?
- escape_like
- except
- exclude
- exclude_having
- exists
- extension
- filter
- first
- first!
- first_source
- first_source_alias
- first_source_table
- for_update
- freeze
- from
- from_self
- function_sql_append
- get
- graph
- grep
- group
- group_and_count
- group_append
- group_by
- group_cube
- group_rollup
- grouping_sets
- hash
- having
- import
- insert
- insert_sql
- inspect
- intersect
- invert
- join
- join_clause_sql_append
- join_on_clause_sql_append
- join_table
- join_using_clause_sql_append
- joined_dataset?
- last
- lateral
- limit
- literal_append
- literal_date_or_time
- lock_style
- map
- max
- merge
- merge_delete
- merge_insert
- merge_sql
- merge_update
- merge_using
- min
- multi_insert
- multi_insert_sql
- naked
- negative_boolean_constant_sql_append
- nowait
- offset
- opts
- or
- order
- order_append
- order_by
- order_more
- order_prepend
- ordered_expression_sql_append
- paged_each
- placeholder_literal_string_sql_append
- placeholder_literalizer_class
- placeholder_literalizer_loader
- prepare
- prepare_sql_type
- provides_accurate_rows_matched?
- qualified_identifier_sql_append
- qualify
- quote_identifier_append
- quote_identifiers?
- quote_schema_table_append
- quoted_identifier_append
- recursive_cte_requires_column_aliases?
- requires_placeholder_type_specifiers?
- requires_sql_standard_datetimes?
- returning
- reverse
- reverse_order
- row_number_column
- row_proc
- schema_and_table
- select
- select_all
- select_append
- select_group
- select_hash
- select_hash_groups
- select_map
- select_more
- select_order_map
- select_prepend
- select_set
- server
- server?
- set_graph_aliases
- single_record
- single_record!
- single_value
- single_value!
- skip_limit_check
- skip_locked
- split_alias
- split_qualifiers
- sql
- subscript_sql_append
- sum
- supports_cte?
- supports_cte_in_subqueries?
- supports_deleting_joins?
- supports_derived_column_lists?
- supports_distinct_on?
- supports_group_cube?
- supports_group_rollup?
- supports_grouping_sets?
- supports_insert_select?
- supports_intersect_except?
- supports_intersect_except_all?
- supports_is_true?
- supports_join_using?
- supports_lateral_subqueries?
- supports_limits_in_correlated_subqueries?
- supports_merge?
- supports_modifying_joins?
- supports_multiple_column_in?
- supports_nowait?
- supports_offsets_in_correlated_subqueries?
- supports_ordered_distinct_on?
- supports_placeholder_literalizer?
- supports_regexp?
- supports_replace?
- supports_returning?
- supports_select_all_and_column?
- supports_skip_locked?
- supports_timestamp_timezones?
- supports_timestamp_usecs?
- supports_updating_joins?
- supports_where_true?
- supports_window_clause?
- supports_window_function_frame_option?
- supports_window_functions?
- to_hash
- to_hash_groups
- truncate
- truncate_sql
- unfiltered
- ungraphed
- ungrouped
- union
- unlimited
- unordered
- unqualified_column_for
- unused_table_alias
- update
- update_sql
- where
- where_all
- where_each
- where_single_value
- window
- window_sql_append
- with
- with_extend
- with_quote_identifiers
- with_recursive
- with_row_proc
- with_sql
- with_sql_all
- with_sql_delete
- with_sql_each
- with_sql_first
- with_sql_insert
- with_sql_single_value
Protected Instance
Contents
- 1 - Methods that return modified datasets
- 2 - Methods that execute code on the database
- 3 - User Methods relating to SQL Creation
- 4 - Methods that describe what the dataset supports
- 5 - Methods related to dataset graphing
- 6 - Miscellaneous methods
- 8 - Methods related to prepared statements or bound variables
- 9 - Internal Methods relating to SQL Creation
Classes and Modules
Constants
OPTS = Sequel::OPTS

TRUE_FREEZE = RUBY_VERSION >= '2.4'
Whether the Ruby version in use supports clone(freeze: false) (Ruby 2.4+).
1 - Methods that return modified datasets
Constants
COLUMN_CHANGE_OPTS = [:select, :sql, :from, :join].freeze
The dataset options that require the removal of cached columns if changed.

CONDITIONED_JOIN_TYPES = [:inner, :full_outer, :right_outer, :left_outer, :full, :right, :left].freeze
These symbols have _join methods created (e.g. inner_join) that call join_table with the symbol, passing along the arguments and block from the method call.

EMPTY_ARRAY = [].freeze

EXTENSIONS = {}
Hash of extension name symbols to callable objects to load the extension into the Dataset object.

JOIN_METHODS = ((CONDITIONED_JOIN_TYPES + UNCONDITIONED_JOIN_TYPES).map{|x| "#{x}_join".to_sym} + [:join, :join_table]).freeze
All methods that return modified datasets with a joined table added.

NON_SQL_OPTIONS = [:server, :graph, :row_proc, :quote_identifiers, :skip_symbol_cache].freeze
Which options don't affect the SQL generation.

QUERY_METHODS = ((<<-METHS).split.map(&:to_sym) + JOIN_METHODS).freeze
Methods that return modified datasets.

SIMPLE_SELECT_ALL_ALLOWED_FROM = [Symbol, SQL::Identifier, SQL::QualifiedIdentifier].freeze
From types allowed to be considered a simple_select_all.

UNCONDITIONED_JOIN_TYPES = [:natural, :natural_left, :natural_right, :natural_full, :cross].freeze
These symbols have _join methods created (e.g. natural_join). They accept a table argument and options hash which is passed to join_table.
Public Instance Aliases
_clone -> clone
Save the original clone implementation, as some other methods need to call it internally.
Public Class methods
Register an extension callback for Dataset objects. ext should be the extension name symbol, and mod should be a Module that will be included in the dataset’s class. This also registers a Database extension that will extend all of the database’s datasets.
# File lib/sequel/dataset/query.rb 55 def self.register_extension(ext, mod=nil, &block) 56 if mod 57 raise(Error, "cannot provide both mod and block to Dataset.register_extension") if block 58 if mod.is_a?(Module) 59 block = proc{|ds| ds.extend(mod)} 60 Sequel::Database.register_extension(ext){|db| db.extend_datasets(mod)} 61 Sequel.synchronize{EXTENSION_MODULES[ext] = mod} 62 else 63 block = mod 64 end 65 end 66 67 unless mod.is_a?(Module) 68 Sequel::Deprecation.deprecate("Providing a block or non-module to Sequel::Dataset.register_extension is deprecated and support for it will be removed in Sequel 6.") 69 end 70 71 Sequel.synchronize{EXTENSIONS[ext] = block} 72 end
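As a hedged sketch of how registration and loading fit together (the module and extension name below are invented for illustration):

# Hypothetical module adding a pagination helper to datasets
module PaginationHelpers
  def page(n, per_page = 25)
    limit(per_page, (n - 1) * per_page)
  end
end

# Register it so datasets can load it by name
Sequel::Dataset.register_extension(:pagination_helpers, PaginationHelpers)

DB[:items].extension(:pagination_helpers).page(2)
# SELECT * FROM items LIMIT 25 OFFSET 25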
Public Instance methods
Returns a new clone of the dataset with the given options merged. If the options changed include options in COLUMN_CHANGE_OPTS, the cached columns are deleted. This method should generally not be called directly by user code.
# File lib/sequel/dataset/query.rb 90 def clone(opts = nil || (return self)) 91 # return self used above because clone is called by almost all 92 # other query methods, and it is the fastest approach 93 c = super(:freeze=>false) 94 c.opts.merge!(opts) 95 unless opts.each_key{|o| break if COLUMN_CHANGE_OPTS.include?(o)} 96 c.clear_columns_cache 97 end 98 c.freeze 99 end
Returns a copy of the dataset with the SQL DISTINCT clause. The DISTINCT clause is used to remove duplicate rows from the output. If arguments are provided, uses a DISTINCT ON clause, in which case it will only be distinct on those columns, instead of all returned columns. If a block is given, it is treated as a virtual row block, similar to where. Raises an error if arguments are given and DISTINCT ON is not supported.
DB[:items].distinct # SQL: SELECT DISTINCT * FROM items
DB[:items].order(:id).distinct(:id) # SQL: SELECT DISTINCT ON (id) * FROM items ORDER BY id
DB[:items].order(:id).distinct{func(:id)} # SQL: SELECT DISTINCT ON (func(id)) * FROM items ORDER BY id
DISTINCT ON can be emulated on MySQL, but the emulation does not respect the dataset's ORDER, and in many cases it does not work when the ONLY_FULL_GROUP_BY sql_mode is used (the default on MySQL 5.7.5+).
# File lib/sequel/dataset/query.rb 129 def distinct(*args, &block) 130 virtual_row_columns(args, block) 131 if args.empty? 132 return self if opts[:distinct] == EMPTY_ARRAY 133 cached_dataset(:_distinct_ds){clone(:distinct => EMPTY_ARRAY)} 134 else 135 raise(InvalidOperation, "DISTINCT ON not supported") unless supports_distinct_on? 136 clone(:distinct => args.freeze) 137 end 138 end
Adds an EXCEPT clause using a second dataset object. An EXCEPT compound dataset returns all rows in the current dataset that are not in the given dataset. Raises an InvalidOperation if the operation is not supported. Options:
:alias
Use the given value as the from_self alias.

:all
Set to true to use EXCEPT ALL instead of EXCEPT, so duplicate rows can occur.

:from_self
Set to false to not wrap the returned dataset in a from_self call.
DB[:items].except(DB[:other_items])
# SELECT * FROM (SELECT * FROM items EXCEPT SELECT * FROM other_items) AS t1

DB[:items].except(DB[:other_items], all: true, from_self: false)
# SELECT * FROM items EXCEPT ALL SELECT * FROM other_items

DB[:items].except(DB[:other_items], alias: :i)
# SELECT * FROM (SELECT * FROM items EXCEPT SELECT * FROM other_items) AS i
# File lib/sequel/dataset/query.rb 157 def except(dataset, opts=OPTS) 158 raise(InvalidOperation, "EXCEPT not supported") unless supports_intersect_except? 159 raise(InvalidOperation, "EXCEPT ALL not supported") if opts[:all] && !supports_intersect_except_all? 160 compound_clone(:except, dataset, opts) 161 end
Performs the inverse of Dataset#where. Note that if you have multiple filter conditions, this is not the same as a negation of all conditions.
DB[:items].exclude(category: 'software')
# SELECT * FROM items WHERE (category != 'software')

DB[:items].exclude(category: 'software', id: 3)
# SELECT * FROM items WHERE ((category != 'software') OR (id != 3))
Also note that SQL uses 3-valued boolean logic (true, false, NULL), so the inverse of a true condition is a false condition, and will still not match rows that were NULL originally. If you take the earlier example:
DB[:items].exclude(category: 'software') # SELECT * FROM items WHERE (category != 'software')
Note that this does not match rows where category is NULL. This is because NULL is an unknown value, and you do not know whether or not the NULL category is software. You can explicitly specify how to handle NULL values if you want:
DB[:items].exclude(Sequel.~(category: nil) & {category: 'software'}) # SELECT * FROM items WHERE ((category IS NULL) OR (category != 'software'))
# File lib/sequel/dataset/query.rb 187 def exclude(*cond, &block) 188 add_filter(:where, cond, true, &block) 189 end
Inverts the given conditions and adds them to the HAVING clause.
DB[:items].select_group(:name).exclude_having{count(name) < 2} # SELECT name FROM items GROUP BY name HAVING (count(name) >= 2)
See documentation for exclude for how inversion is handled in regards to SQL 3-valued boolean logic.
# File lib/sequel/dataset/query.rb 198 def exclude_having(*cond, &block) 199 add_filter(:having, cond, true, &block) 200 end
Return a clone of the dataset loaded with the given dataset extensions. If no related extension file exists or the extension does not have specific support for Dataset objects, an error will be raised.
# File lib/sequel/dataset/query.rb 206 def extension(*exts) 207 exts.each{|ext| Sequel.extension(ext) unless Sequel.synchronize{EXTENSIONS[ext]}} 208 mods = exts.map{|ext| Sequel.synchronize{EXTENSION_MODULES[ext]}} 209 if mods.all? 210 with_extend(*mods) 211 else 212 with_extend(DeprecatedSingletonClassMethods).extension(*exts) 213 end 214 end
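For example, loading a dataset extension that ships with Sequel (columns_introspection is used here only as an illustration):

ds = DB[:items].select(:id, :name).extension(:columns_introspection)
ds.columns # may be determined from the SELECT list without issuing a query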
Alias for where.
# File lib/sequel/dataset/query.rb 226 def filter(*cond, &block) 227 where(*cond, &block) 228 end
Returns a cloned dataset with a :update lock style.
DB[:table].for_update # SELECT * FROM table FOR UPDATE
# File lib/sequel/dataset/query.rb 233 def for_update 234 cached_lock_style_dataset(:_for_update_ds, :update) 235 end
Returns a copy of the dataset with the source changed. If no source is given, removes all tables. If multiple sources are given, it is the same as using a CROSS JOIN (cartesian product) between all tables. If a block is given, it is treated as a virtual row block, similar to where.
DB[:items].from # SQL: SELECT *
DB[:items].from(:blah) # SQL: SELECT * FROM blah
DB[:items].from(:blah, :foo) # SQL: SELECT * FROM blah, foo
DB[:items].from{fun(arg)} # SQL: SELECT * FROM fun(arg)
# File lib/sequel/dataset/query.rb 246 def from(*source, &block) 247 virtual_row_columns(source, block) 248 table_alias_num = 0 249 ctes = nil 250 source.map! do |s| 251 case s 252 when Dataset 253 if hoist_cte?(s) 254 ctes ||= [] 255 ctes += s.opts[:with] 256 s = s.clone(:with=>nil) 257 end 258 SQL::AliasedExpression.new(s, dataset_alias(table_alias_num+=1)) 259 when Symbol 260 sch, table, aliaz = split_symbol(s) 261 if aliaz 262 s = sch ? SQL::QualifiedIdentifier.new(sch, table) : SQL::Identifier.new(table) 263 SQL::AliasedExpression.new(s, aliaz.to_sym) 264 else 265 s 266 end 267 else 268 s 269 end 270 end 271 o = {:from=>source.empty? ? nil : source.freeze} 272 o[:with] = ((opts[:with] || EMPTY_ARRAY) + ctes).freeze if ctes 273 o[:num_dataset_sources] = table_alias_num if table_alias_num > 0 274 clone(o) 275 end
Returns a dataset selecting from the current dataset. Options:
:alias
Controls the alias of the table.

:column_aliases
Also aliases columns, using derived column lists. Only used in conjunction with :alias.
ds = DB[:items].order(:name).select(:id, :name)
# SELECT id,name FROM items ORDER BY name

ds.from_self
# SELECT * FROM (SELECT id, name FROM items ORDER BY name) AS t1

ds.from_self(alias: :foo)
# SELECT * FROM (SELECT id, name FROM items ORDER BY name) AS foo

ds.from_self(alias: :foo, column_aliases: [:c1, :c2])
# SELECT * FROM (SELECT id, name FROM items ORDER BY name) AS foo(c1, c2)
# File lib/sequel/dataset/query.rb 294 def from_self(opts=OPTS) 295 fs = {} 296 @opts.keys.each{|k| fs[k] = nil unless non_sql_option?(k)} 297 pr = proc do 298 c = clone(fs).from(opts[:alias] ? as(opts[:alias], opts[:column_aliases]) : self) 299 if cols = _columns 300 c.send(:columns=, cols) 301 end 302 c 303 end 304 305 opts.empty? ? cached_dataset(:_from_self_ds, &pr) : pr.call 306 end
Match any of the columns to any of the patterns. The terms can be strings (which use LIKE) or regular expressions if the database supports that. Note that the total number of pattern matches will be Array(columns).length * Array(terms).length, which could cause performance issues.
Options (all are boolean):
:all_columns
All columns must be matched to any of the given patterns.

:all_patterns
All patterns must match at least one of the columns.

:case_insensitive
Use a case insensitive pattern match (the default is case sensitive if the database supports it).
If both :all_columns and :all_patterns are true, all columns must match all patterns.
Examples:
dataset.grep(:a, '%test%')
# SELECT * FROM items WHERE (a LIKE '%test%' ESCAPE '\')

dataset.grep([:a, :b], %w'%test% foo')
# SELECT * FROM items WHERE ((a LIKE '%test%' ESCAPE '\') OR (a LIKE 'foo' ESCAPE '\')
#   OR (b LIKE '%test%' ESCAPE '\') OR (b LIKE 'foo' ESCAPE '\'))

dataset.grep([:a, :b], %w'%foo% %bar%', all_patterns: true)
# SELECT * FROM a WHERE (((a LIKE '%foo%' ESCAPE '\') OR (b LIKE '%foo%' ESCAPE '\'))
#   AND ((a LIKE '%bar%' ESCAPE '\') OR (b LIKE '%bar%' ESCAPE '\')))

dataset.grep([:a, :b], %w'%foo% %bar%', all_columns: true)
# SELECT * FROM a WHERE (((a LIKE '%foo%' ESCAPE '\') OR (a LIKE '%bar%' ESCAPE '\'))
#   AND ((b LIKE '%foo%' ESCAPE '\') OR (b LIKE '%bar%' ESCAPE '\')))

dataset.grep([:a, :b], %w'%foo% %bar%', all_patterns: true, all_columns: true)
# SELECT * FROM a WHERE ((a LIKE '%foo%' ESCAPE '\') AND (b LIKE '%foo%' ESCAPE '\')
#   AND (a LIKE '%bar%' ESCAPE '\') AND (b LIKE '%bar%' ESCAPE '\'))
# File lib/sequel/dataset/query.rb 343 def grep(columns, patterns, opts=OPTS) 344 column_op = opts[:all_columns] ? :AND : :OR 345 if opts[:all_patterns] 346 conds = Array(patterns).map do |pat| 347 SQL::BooleanExpression.new(column_op, *Array(columns).map{|c| SQL::StringExpression.like(c, pat, opts)}) 348 end 349 where(SQL::BooleanExpression.new(:AND, *conds)) 350 else 351 conds = Array(columns).map do |c| 352 SQL::BooleanExpression.new(:OR, *Array(patterns).map{|pat| SQL::StringExpression.like(c, pat, opts)}) 353 end 354 where(SQL::BooleanExpression.new(column_op, *conds)) 355 end 356 end
Returns a copy of the dataset with the results grouped by the value of the given columns. If a block is given, it is treated as a virtual row block, similar to where.
DB[:items].group(:id) # SELECT * FROM items GROUP BY id
DB[:items].group(:id, :name) # SELECT * FROM items GROUP BY id, name
DB[:items].group{[a, sum(b)]} # SELECT * FROM items GROUP BY a, sum(b)
# File lib/sequel/dataset/query.rb 365 def group(*columns, &block) 366 virtual_row_columns(columns, block) 367 clone(:group => (columns.compact.empty? ? nil : columns.freeze)) 368 end
Returns a dataset grouped by the given column with count by group. Column aliases may be supplied, and will be included in the select clause. If a block is given, it is treated as a virtual row block, similar to where.
Examples:
DB[:items].group_and_count(:name).all
# SELECT name, count(*) AS count FROM items GROUP BY name
# => [{:name=>'a', :count=>1}, ...]

DB[:items].group_and_count(:first_name, :last_name).all
# SELECT first_name, last_name, count(*) AS count FROM items GROUP BY first_name, last_name
# => [{:first_name=>'a', :last_name=>'b', :count=>1}, ...]

DB[:items].group_and_count(Sequel[:first_name].as(:name)).all
# SELECT first_name AS name, count(*) AS count FROM items GROUP BY first_name
# => [{:name=>'a', :count=>1}, ...]

DB[:items].group_and_count{substr(:first_name, 1, 1).as(:initial)}.all
# SELECT substr(first_name, 1, 1) AS initial, count(*) AS count FROM items GROUP BY substr(first_name, 1, 1)
# => [{:initial=>'a', :count=>1}, ...]
# File lib/sequel/dataset/query.rb 396 def group_and_count(*columns, &block) 397 select_group(*columns, &block).select_append(COUNT_OF_ALL_AS_COUNT) 398 end
Returns a copy of the dataset with the given columns added to the list of existing columns to group on. If no existing columns are present this method simply sets the columns as the initial ones to group on.
DB[:items].group_append(:b) # SELECT * FROM items GROUP BY b
DB[:items].group(:a).group_append(:b) # SELECT * FROM items GROUP BY a, b
# File lib/sequel/dataset/query.rb 406 def group_append(*columns, &block) 407 columns = @opts[:group] + columns if @opts[:group] 408 group(*columns, &block) 409 end
Alias of group
# File lib/sequel/dataset/query.rb 371 def group_by(*columns, &block) 372 group(*columns, &block) 373 end
Adds the appropriate CUBE syntax to GROUP BY.
# File lib/sequel/dataset/query.rb 412 def group_cube 413 raise Error, "GROUP BY CUBE not supported on #{db.database_type}" unless supports_group_cube? 414 clone(:group_options=>:cube) 415 end
Adds the appropriate ROLLUP syntax to GROUP BY.
# File lib/sequel/dataset/query.rb 418 def group_rollup 419 raise Error, "GROUP BY ROLLUP not supported on #{db.database_type}" unless supports_group_rollup? 420 clone(:group_options=>:rollup) 421 end
Adds the appropriate GROUPING SETS syntax to GROUP BY.
# File lib/sequel/dataset/query.rb 424 def grouping_sets 425 raise Error, "GROUP BY GROUPING SETS not supported on #{db.database_type}" unless supports_grouping_sets? 426 clone(:group_options=>:"grouping sets") 427 end
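A hedged sketch of the SQL these produce on a database that supports them (PostgreSQL-style syntax shown; the exact SQL varies by adapter):

DB[:items].group(:a, :b).group_rollup # SELECT * FROM items GROUP BY ROLLUP(a, b)
DB[:items].group(:a, :b).group_cube # SELECT * FROM items GROUP BY CUBE(a, b)
DB[:items].group([:a, :b], :c).grouping_sets # SELECT * FROM items GROUP BY GROUPING SETS((a, b), c)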
Returns a copy of the dataset with the HAVING conditions changed. See where for argument types.
DB[:items].group(:sum).having(sum: 10) # SELECT * FROM items GROUP BY sum HAVING (sum = 10)
# File lib/sequel/dataset/query.rb 433 def having(*cond, &block) 434 add_filter(:having, cond, &block) 435 end
Adds an INTERSECT clause using a second dataset object. An INTERSECT compound dataset returns all rows in both the current dataset and the given dataset. Raises an InvalidOperation if the operation is not supported. Options:
:alias
Use the given value as the from_self alias.

:all
Set to true to use INTERSECT ALL instead of INTERSECT, so duplicate rows can occur.

:from_self
Set to false to not wrap the returned dataset in a from_self call.
DB[:items].intersect(DB[:other_items])
# SELECT * FROM (SELECT * FROM items INTERSECT SELECT * FROM other_items) AS t1

DB[:items].intersect(DB[:other_items], all: true, from_self: false)
# SELECT * FROM items INTERSECT ALL SELECT * FROM other_items

DB[:items].intersect(DB[:other_items], alias: :i)
# SELECT * FROM (SELECT * FROM items INTERSECT SELECT * FROM other_items) AS i
# File lib/sequel/dataset/query.rb 454 def intersect(dataset, opts=OPTS) 455 raise(InvalidOperation, "INTERSECT not supported") unless supports_intersect_except? 456 raise(InvalidOperation, "INTERSECT ALL not supported") if opts[:all] && !supports_intersect_except_all? 457 compound_clone(:intersect, dataset, opts) 458 end
Inverts the current WHERE and HAVING clauses. If there is neither a WHERE nor a HAVING clause, adds a WHERE clause that is always false.
DB[:items].where(category: 'software').invert
# SELECT * FROM items WHERE (category != 'software')

DB[:items].where(category: 'software', id: 3).invert
# SELECT * FROM items WHERE ((category != 'software') OR (id != 3))
See documentation for exclude for how inversion is handled in regards to SQL 3-valued boolean logic.
# File lib/sequel/dataset/query.rb 471 def invert 472 cached_dataset(:_invert_ds) do 473 having, where = @opts.values_at(:having, :where) 474 if having.nil? && where.nil? 475 where(false) 476 else 477 o = {} 478 o[:having] = SQL::BooleanExpression.invert(having) if having 479 o[:where] = SQL::BooleanExpression.invert(where) if where 480 clone(o) 481 end 482 end 483 end
Alias of inner_join
# File lib/sequel/dataset/query.rb 486 def join(*args, &block) 487 inner_join(*args, &block) 488 end
Returns a joined dataset. Not usually called directly, users should use the appropriate join method (e.g. join, left_join, natural_join, cross_join) which fills in the type argument.
Takes the following arguments:
type
The type of join to do (e.g. :inner).

table
The table to join into the current dataset. Generally a Symbol, an SQL::Identifier, SQL::QualifiedIdentifier, or SQL::AliasedExpression, or another Dataset.

expr
The conditions used when joining; how they are interpreted depends on the join type (e.g. a hash or array of condition pairs for an ON clause, an array of symbols for a USING clause, or nil for no conditions).

options
A hash of options; supported keys include :table_alias, :implicit_qualifier, :join_using, :reset_implicit_qualifier, and :qualify.

block
The block argument should only be given if a JOIN with an ON clause is used, in which case it yields the table alias/name for the table currently being joined, the table alias/name for the last joined table (or the first table), and an array of the previous join clauses.
Examples:
DB[:a].join_table(:cross, :b)
# SELECT * FROM a CROSS JOIN b

DB[:a].join_table(:inner, DB[:b], c: :d)
# SELECT * FROM a INNER JOIN (SELECT * FROM b) AS t1 ON (t1.c = a.d)

DB[:a].join_table(:left, Sequel[:b].as(:c), [:d])
# SELECT * FROM a LEFT JOIN b AS c USING (d)

DB[:a].natural_join(:b).join_table(:inner, :c) do |ta, jta, js|
  (Sequel.qualify(ta, :d) > Sequel.qualify(jta, :e)) & {Sequel.qualify(ta, :f)=>DB.from(js.first.table).select(:g)}
end
# SELECT * FROM a NATURAL JOIN b INNER JOIN c
#   ON ((c.d > b.e) AND (c.f IN (SELECT g FROM b)))
# File lib/sequel/dataset/query.rb 550 def join_table(type, table, expr=nil, options=OPTS, &block) 551 if hoist_cte?(table) 552 s, ds = hoist_cte(table) 553 return s.join_table(type, ds, expr, options, &block) 554 end 555 556 using_join = options[:join_using] || (expr.is_a?(Array) && !expr.empty? && expr.all?{|x| x.is_a?(Symbol)}) 557 if using_join && !supports_join_using? 558 h = {} 559 expr.each{|e| h[e] = e} 560 return join_table(type, table, h, options) 561 end 562 563 table_alias = options[:table_alias] 564 565 if table.is_a?(SQL::AliasedExpression) 566 table_expr = if table_alias 567 SQL::AliasedExpression.new(table.expression, table_alias, table.columns) 568 else 569 table 570 end 571 table = table_expr.expression 572 table_name = table_alias = table_expr.alias 573 elsif table.is_a?(Dataset) 574 if table_alias.nil? 575 table_alias_num = (@opts[:num_dataset_sources] || 0) + 1 576 table_alias = dataset_alias(table_alias_num) 577 end 578 table_name = table_alias 579 table_expr = SQL::AliasedExpression.new(table, table_alias) 580 else 581 table, implicit_table_alias = split_alias(table) 582 table_alias ||= implicit_table_alias 583 table_name = table_alias || table 584 table_expr = table_alias ? SQL::AliasedExpression.new(table, table_alias) : table 585 end 586 587 join = if expr.nil? and !block 588 SQL::JoinClause.new(type, table_expr) 589 elsif using_join 590 raise(Sequel::Error, "can't use a block if providing an array of symbols as expr") if block 591 SQL::JoinUsingClause.new(expr, type, table_expr) 592 else 593 last_alias = options[:implicit_qualifier] || @opts[:last_joined_table] || first_source_alias 594 qualify_type = options[:qualify] 595 if Sequel.condition_specifier?(expr) 596 expr = expr.map do |k, v| 597 qualify_type = default_join_table_qualification if qualify_type.nil? 598 case qualify_type 599 when false 600 nil # Do no qualification 601 when :deep 602 k = Sequel::Qualifier.new(table_name).transform(k) 603 v = Sequel::Qualifier.new(last_alias).transform(v) 604 else 605 k = qualified_column_name(k, table_name) if k.is_a?(Symbol) 606 v = qualified_column_name(v, last_alias) if v.is_a?(Symbol) 607 end 608 [k,v] 609 end 610 expr = SQL::BooleanExpression.from_value_pairs(expr) 611 end 612 if block 613 expr2 = yield(table_name, last_alias, @opts[:join] || EMPTY_ARRAY) 614 expr = expr ? SQL::BooleanExpression.new(:AND, expr, expr2) : expr2 615 end 616 SQL::JoinOnClause.new(expr, type, table_expr) 617 end 618 619 opts = {:join => ((@opts[:join] || EMPTY_ARRAY) + [join]).freeze} 620 opts[:last_joined_table] = table_name unless options[:reset_implicit_qualifier] == false 621 opts[:num_dataset_sources] = table_alias_num if table_alias_num 622 clone(opts) 623 end
Marks this dataset as a lateral dataset. If used in another dataset’s FROM or JOIN clauses, it will surround the subquery with LATERAL to enable it to deal with previous tables in the query:
DB.from(:a, DB[:b].where(Sequel[:a][:c]=>Sequel[:b][:d]).lateral) # SELECT * FROM a, LATERAL (SELECT * FROM b WHERE (a.c = b.d))
# File lib/sequel/dataset/query.rb 644 def lateral 645 return self if opts[:lateral] 646 cached_dataset(:_lateral_ds){clone(:lateral=>true)} 647 end
If given an integer, the dataset will contain only the first l results. If given a range, it will contain only those at offsets within that range. If a second argument is given, it is used as an offset. To use an offset without a limit, pass nil as the first argument.
DB[:items].limit(10) # SELECT * FROM items LIMIT 10
DB[:items].limit(10, 20) # SELECT * FROM items LIMIT 10 OFFSET 20
DB[:items].limit(10...20) # SELECT * FROM items LIMIT 10 OFFSET 10
DB[:items].limit(10..20) # SELECT * FROM items LIMIT 11 OFFSET 10
DB[:items].limit(nil, 20) # SELECT * FROM items OFFSET 20
# File lib/sequel/dataset/query.rb 659 def limit(l, o = (no_offset = true; nil)) 660 return from_self.limit(l, o) if @opts[:sql] 661 662 if l.is_a?(Range) 663 no_offset = false 664 o = l.first 665 l = l.last - l.first + (l.exclude_end? ? 0 : 1) 666 end 667 l = l.to_i if l.is_a?(String) && !l.is_a?(LiteralString) 668 if l.is_a?(Integer) 669 raise(Error, 'Limits must be greater than or equal to 1') unless l >= 1 670 end 671 672 ds = clone(:limit=>l) 673 ds = ds.offset(o) unless no_offset 674 ds 675 end
Returns a cloned dataset with the given lock style. If style is a string, it will be used directly. You should never pass a string to this method that is derived from user input, as that can lead to SQL injection.
A symbol may be used for database independent locking behavior, but all supported symbols have separate methods (e.g. for_update).
DB[:items].lock_style('FOR SHARE NOWAIT')
# SELECT * FROM items FOR SHARE NOWAIT

DB[:items].lock_style('FOR UPDATE OF table1 SKIP LOCKED')
# SELECT * FROM items FOR UPDATE OF table1 SKIP LOCKED
# File lib/sequel/dataset/query.rb 689 def lock_style(style) 690 clone(:lock => style) 691 end
Return a dataset with a WHEN MATCHED THEN DELETE clause added to the MERGE statement. If a block is passed, treat it as a virtual row and use it as additional conditions for the match.
merge_delete
# WHEN MATCHED THEN DELETE

merge_delete{a > 30}
# WHEN MATCHED AND (a > 30) THEN DELETE
# File lib/sequel/dataset/query.rb 702 def merge_delete(&block) 703 _merge_when(:type=>:delete, &block) 704 end
Return a dataset with a WHEN NOT MATCHED THEN INSERT clause added to the MERGE statement. If a block is passed, treat it as a virtual row and use it as additional conditions for the match.
The arguments provided can be any arguments that would be accepted by insert.
merge_insert(i1: :i2, a: Sequel[:b]+11)
# WHEN NOT MATCHED THEN INSERT (i1, a) VALUES (i2, (b + 11))

merge_insert(:i2, Sequel[:b]+11){a > 30}
# WHEN NOT MATCHED AND (a > 30) THEN INSERT VALUES (i2, (b + 11))
# File lib/sequel/dataset/query.rb 718 def merge_insert(*values, &block) 719 _merge_when(:type=>:insert, :values=>values, &block) 720 end
Return a dataset with a WHEN MATCHED THEN UPDATE clause added to the MERGE statement. If a block is passed, treat it as a virtual row and use it as additional conditions for the match.
merge_update(i1: Sequel[:i1]+:i2+10, a: Sequel[:a]+:b+20)
# WHEN MATCHED THEN UPDATE SET i1 = (i1 + i2 + 10), a = (a + b + 20)

merge_update(i1: :i2){a > 30}
# WHEN MATCHED AND (a > 30) THEN UPDATE SET i1 = i2
# File lib/sequel/dataset/query.rb 731 def merge_update(values, &block) 732 _merge_when(:type=>:update, :values=>values, &block) 733 end
Return a dataset with the source and join condition to use for the MERGE statement.
merge_using(:m2, i1: :i2) # USING m2 ON (i1 = i2)
# File lib/sequel/dataset/query.rb 739 def merge_using(source, join_condition) 740 clone(:merge_using => [source, join_condition].freeze) 741 end
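Putting the MERGE pieces together, a hedged sketch (table and column names are invented; requires a database with MERGE support, and the merge method itself runs the statement):

DB[:m1].
  merge_using(:m2, i1: :i2).
  merge_insert(i1: :i2, a: Sequel[:b]).
  merge_update(a: Sequel[:a] + :b).
  merge_delete{a > 30}.
  merge
# MERGE INTO m1 USING m2 ON (i1 = i2)
# WHEN NOT MATCHED THEN INSERT (i1, a) VALUES (i2, b)
# WHEN MATCHED THEN UPDATE SET a = (a + b)
# WHEN MATCHED AND (a > 30) THEN DELETE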
Returns a cloned dataset without a row_proc.
ds = DB[:items].with_row_proc(:invert.to_proc)
ds.all # => [{2=>:id}]
ds.naked.all # => [{:id=>2}]
# File lib/sequel/dataset/query.rb 748 def naked 749 return self unless opts[:row_proc] 750 cached_dataset(:_naked_ds){with_row_proc(nil)} 751 end
Returns a copy of the dataset that will raise a DatabaseLockTimeout instead of waiting for rows that are locked by another transaction.
DB[:items].for_update.nowait # SELECT * FROM items FOR UPDATE NOWAIT
# File lib/sequel/dataset/query.rb 758 def nowait 759 return self if opts[:nowait] 760 cached_dataset(:_nowait_ds) do 761 raise(Error, 'This dataset does not support raises errors instead of waiting for locked rows') unless supports_nowait? 762 clone(:nowait=>true) 763 end 764 end
Returns a copy of the dataset with a specified offset. Can be safely combined with limit. If you call limit with an offset, it will override the offset if you've called offset first.
DB[:items].offset(10) # SELECT * FROM items OFFSET 10
# File lib/sequel/dataset/query.rb 771 def offset(o) 772 o = o.to_i if o.is_a?(String) && !o.is_a?(LiteralString) 773 if o.is_a?(Integer) 774 raise(Error, 'Offsets must be greater than or equal to 0') unless o >= 0 775 end 776 clone(:offset => o) 777 end
Adds an alternate filter to an existing WHERE clause using OR. If there is no WHERE clause, then the default is WHERE true, and OR would be redundant, so return the dataset in that case.
DB[:items].where(:a).or(:b) # SELECT * FROM items WHERE a OR b
DB[:items].or(:b) # SELECT * FROM items
# File lib/sequel/dataset/query.rb 785 def or(*cond, &block) 786 if @opts[:where].nil? 787 self 788 else 789 add_filter(:where, cond, false, :OR, &block) 790 end 791 end
Returns a copy of the dataset with the order changed. If the dataset has an existing order, it is ignored and overwritten with this order. If a nil is given the returned dataset has no order. This can accept multiple arguments of varying kinds, such as SQL functions. If a block is given, it is treated as a virtual row block, similar to where.
DB[:items].order(:name) # SELECT * FROM items ORDER BY name
DB[:items].order(:a, :b) # SELECT * FROM items ORDER BY a, b
DB[:items].order(Sequel.lit('a + b')) # SELECT * FROM items ORDER BY a + b
DB[:items].order(Sequel[:a] + :b) # SELECT * FROM items ORDER BY (a + b)
DB[:items].order(Sequel.desc(:name)) # SELECT * FROM items ORDER BY name DESC
DB[:items].order(Sequel.asc(:name, nulls: :last)) # SELECT * FROM items ORDER BY name ASC NULLS LAST
DB[:items].order{sum(name).desc} # SELECT * FROM items ORDER BY sum(name) DESC
DB[:items].order(nil) # SELECT * FROM items
# File lib/sequel/dataset/query.rb 807 def order(*columns, &block) 808 virtual_row_columns(columns, block) 809 clone(:order => (columns.compact.empty?) ? nil : columns.freeze) 810 end
Returns a copy of the dataset with the order columns added to the end of the existing order.
DB[:items].order(:a).order(:b) # SELECT * FROM items ORDER BY b
DB[:items].order(:a).order_append(:b) # SELECT * FROM items ORDER BY a, b
# File lib/sequel/dataset/query.rb 817 def order_append(*columns, &block) 818 columns = @opts[:order] + columns if @opts[:order] 819 order(*columns, &block) 820 end
Alias of order
# File lib/sequel/dataset/query.rb 823 def order_by(*columns, &block) 824 order(*columns, &block) 825 end
Alias of order_append.
# File lib/sequel/dataset/query.rb 828 def order_more(*columns, &block) 829 order_append(*columns, &block) 830 end
Returns a copy of the dataset with the order columns added to the beginning of the existing order.
DB[:items].order(:a).order(:b) # SELECT * FROM items ORDER BY b
DB[:items].order(:a).order_prepend(:b) # SELECT * FROM items ORDER BY b, a
# File lib/sequel/dataset/query.rb 837 def order_prepend(*columns, &block) 838 ds = order(*columns, &block) 839 @opts[:order] ? ds.order_append(*@opts[:order]) : ds 840 end
Qualify to the given table, or first source if no table is given.
DB[:items].where(id: 1).qualify
# SELECT items.* FROM items WHERE (items.id = 1)

DB[:items].where(id: 1).qualify(:i)
# SELECT i.* FROM items WHERE (i.id = 1)
# File lib/sequel/dataset/query.rb 849 def qualify(table=(cache=true; first_source)) 850 o = @opts 851 return self if o[:sql] 852 853 pr = proc do 854 h = {} 855 (o.keys & QUALIFY_KEYS).each do |k| 856 h[k] = qualified_expression(o[k], table) 857 end 858 h[:select] = [SQL::ColumnAll.new(table)].freeze if !o[:select] || o[:select].empty? 859 clone(h) 860 end 861 862 cache ? cached_dataset(:_qualify_ds, &pr) : pr.call 863 end
Modify the RETURNING clause, which is only supported on a few databases. When returning is used, insert no longer returns the autogenerated primary key and update/delete no longer return the number of modified rows; instead, results are returned using fetch_rows.
DB[:items].returning # RETURNING *
DB[:items].returning(nil) # RETURNING NULL
DB[:items].returning(:id, :name) # RETURNING id, name

DB[:items].returning.insert(a: 1) do |hash|
  # hash for each row inserted, with values for all columns
end
DB[:items].returning.update(a: 1) do |hash|
  # hash for each row updated, with values for all columns
end
DB[:items].returning.delete do |hash|
  # hash for each row deleted, with values for all columns
end
# File lib/sequel/dataset/query.rb 883 def returning(*values) 884 if values.empty? 885 return self if opts[:returning] == EMPTY_ARRAY 886 cached_dataset(:_returning_ds) do 887 raise Error, "RETURNING is not supported on #{db.database_type}" unless supports_returning?(:insert) 888 clone(:returning=>EMPTY_ARRAY) 889 end 890 else 891 raise Error, "RETURNING is not supported on #{db.database_type}" unless supports_returning?(:insert) 892 clone(:returning=>values.freeze) 893 end 894 end
Returns a copy of the dataset with the order reversed. If no order is given, the existing order is inverted.
DB[:items].reverse(:id) # SELECT * FROM items ORDER BY id DESC
DB[:items].reverse{foo(bar)} # SELECT * FROM items ORDER BY foo(bar) DESC
DB[:items].order(:id).reverse # SELECT * FROM items ORDER BY id DESC
DB[:items].order(:id).reverse(Sequel.desc(:name)) # SELECT * FROM items ORDER BY name ASC
# File lib/sequel/dataset/query.rb 903 def reverse(*order, &block) 904 if order.empty? && !block 905 cached_dataset(:_reverse_ds){order(*invert_order(@opts[:order]))} 906 else 907 virtual_row_columns(order, block) 908 order(*invert_order(order.empty? ? @opts[:order] : order.freeze)) 909 end 910 end
Alias of reverse
# File lib/sequel/dataset/query.rb 913 def reverse_order(*order, &block) 914 reverse(*order, &block) 915 end
Returns a copy of the dataset with the columns selected changed to the given columns. This also takes a virtual row block, similar to where.
DB[:items].select(:a) # SELECT a FROM items
DB[:items].select(:a, :b) # SELECT a, b FROM items
DB[:items].select{[a, sum(b)]} # SELECT a, sum(b) FROM items
# File lib/sequel/dataset/query.rb 924 def select(*columns, &block) 925 virtual_row_columns(columns, block) 926 clone(:select => columns.freeze) 927 end
Returns a copy of the dataset selecting the wildcard if no arguments are given. If arguments are given, treat them as tables and select all columns (using the wildcard) from each table.
DB[:items].select(:a).select_all # SELECT * FROM items
DB[:items].select_all(:items) # SELECT items.* FROM items
DB[:items].select_all(:items, :foo) # SELECT items.*, foo.* FROM items
# File lib/sequel/dataset/query.rb 936 def select_all(*tables) 937 if tables.empty? 938 return self unless opts[:select] 939 cached_dataset(:_select_all_ds){clone(:select => nil)} 940 else 941 select(*tables.map{|t| i, a = split_alias(t); a || i}.map!{|t| SQL::ColumnAll.new(t)}.freeze) 942 end 943 end
Returns a copy of the dataset with the given columns added to the existing selected columns. If no columns are currently selected, it will select the columns given in addition to *.
DB[:items].select(:a).select(:b) # SELECT b FROM items
DB[:items].select(:a).select_append(:b) # SELECT a, b FROM items
DB[:items].select_append(:b) # SELECT *, b FROM items
# File lib/sequel/dataset/query.rb 952 def select_append(*columns, &block) 953 virtual_row_columns(columns, block) 954 select(*(_current_select(true) + columns)) 955 end
Set both the select and group clauses with the given columns. Column aliases may be supplied, and will be included in the select clause. This also takes a virtual row block similar to where.
DB[:items].select_group(:a, :b)
# SELECT a, b FROM items GROUP BY a, b

DB[:items].select_group(Sequel[:c].as(:a)){f(c2)}
# SELECT c AS a, f(c2) FROM items GROUP BY c, f(c2)
# File lib/sequel/dataset/query.rb 966 def select_group(*columns, &block) 967 virtual_row_columns(columns, block) 968 select(*columns).group(*columns.map{|c| unaliased_identifier(c)}) 969 end
Alias for select_append.
# File lib/sequel/dataset/query.rb 972 def select_more(*columns, &block) 973 select_append(*columns, &block) 974 end
Returns a copy of the dataset with the given columns added to the existing selected columns. If no columns are currently selected, it will select the columns given in addition to *.
DB[:items].select(:a).select(:b) # SELECT b FROM items
DB[:items].select(:a).select_prepend(:b) # SELECT b, a FROM items
DB[:items].select_prepend(:b) # SELECT b, * FROM items
# File lib/sequel/dataset/query.rb 983 def select_prepend(*columns, &block) 984 virtual_row_columns(columns, block) 985 select(*(columns + _current_select(false))) 986 end
Set the server for this dataset to use. Used to pick a specific database shard to run a query against, or to override the default (where SELECT uses :read_only database and all other queries use the :default database). This method is always available but is only useful when database sharding is being used.
DB[:items].all # Uses the :read_only or :default server
DB[:items].delete # Uses the :default server
DB[:items].server(:blah).delete # Uses the :blah server
# File lib/sequel/dataset/query.rb 997 def server(servr) 998 clone(:server=>servr) 999 end
If the database uses sharding and the current dataset has not had a server set, return a cloned dataset that uses the given server. Otherwise, return the receiver directly instead of returning a clone.
# File lib/sequel/dataset/query.rb 1004 def server?(server) 1005 if db.sharded? && !opts[:server] 1006 server(server) 1007 else 1008 self 1009 end 1010 end
Specify that the check for limits/offsets when updating or deleting should be skipped for the dataset.
# File lib/sequel/dataset/query.rb 1013 def skip_limit_check 1014 return self if opts[:skip_limit_check] 1015 cached_dataset(:_skip_limit_check_ds) do 1016 clone(:skip_limit_check=>true) 1017 end 1018 end
Skip locked rows when returning results from this dataset.
# File lib/sequel/dataset/query.rb 1021 def skip_locked 1022 return self if opts[:skip_locked] 1023 cached_dataset(:_skip_locked_ds) do 1024 raise(Error, 'This dataset does not support skipping locked rows') unless supports_skip_locked? 1025 clone(:skip_locked=>true) 1026 end 1027 end
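A hedged example combining the locking modifiers on a database that supports both:

DB[:items].for_update.skip_locked.all
# SELECT * FROM items FOR UPDATE SKIP LOCKED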
Returns a copy of the dataset with no filters (HAVING or WHERE clause) applied.
DB[:items].group(:a).having(a: 1).where(:b).unfiltered # SELECT * FROM items GROUP BY a
# File lib/sequel/dataset/query.rb 1033 def unfiltered 1034 return self unless opts[:where] || opts[:having] 1035 cached_dataset(:_unfiltered_ds){clone(:where => nil, :having => nil)} 1036 end
Returns a copy of the dataset with no grouping (GROUP or HAVING clause) applied.
DB[:items].group(:a).having(a: 1).where(:b).ungrouped # SELECT * FROM items WHERE b
# File lib/sequel/dataset/query.rb 1042 def ungrouped 1043 return self unless opts[:group] || opts[:having] 1044 cached_dataset(:_ungrouped_ds){clone(:group => nil, :having => nil)} 1045 end
Adds a UNION clause using a second dataset object. A UNION compound dataset returns all rows in either the current dataset or the given dataset. Options:
:alias
Use the given value as the from_self alias.

:all
Set to true to use UNION ALL instead of UNION, so duplicate rows can occur.

:from_self
Set to false to not wrap the returned dataset in a from_self call.
DB[:items].union(DB[:other_items])
# SELECT * FROM (SELECT * FROM items UNION SELECT * FROM other_items) AS t1

DB[:items].union(DB[:other_items], all: true, from_self: false)
# SELECT * FROM items UNION ALL SELECT * FROM other_items

DB[:items].union(DB[:other_items], alias: :i)
# SELECT * FROM (SELECT * FROM items UNION SELECT * FROM other_items) AS i
# File lib/sequel/dataset/query.rb 1063 def union(dataset, opts=OPTS) 1064 compound_clone(:union, dataset, opts) 1065 end
Returns a copy of the dataset with no limit or offset.
DB[:items].limit(10, 20).unlimited # SELECT * FROM items
# File lib/sequel/dataset/query.rb 1070 def unlimited 1071 return self unless opts[:limit] || opts[:offset] 1072 cached_dataset(:_unlimited_ds){clone(:limit=>nil, :offset=>nil)} 1073 end
Returns a copy of the dataset with no order.
DB[:items].order(:a).unordered # SELECT * FROM items
# File lib/sequel/dataset/query.rb 1078 def unordered 1079 return self unless opts[:order] 1080 cached_dataset(:_unordered_ds){clone(:order=>nil)} 1081 end
Returns a copy of the dataset with the given WHERE conditions imposed upon it.
Accepts the following argument types:
Hash, Array of pairs
A list of equality/inclusion expressions.

Symbol
Taken as a boolean column argument (e.g. WHERE active).

Sequel::SQL::BooleanExpression, Sequel::LiteralString
An existing condition expression, probably created using the Sequel expression filter DSL.
where also accepts a block, which should return one of the above argument types, and is treated the same way. This block yields a virtual row object, which is easy to use to create identifiers and functions. For more details on the virtual row support, see the “Virtual Rows” guide
If both a block and regular argument are provided, they get ANDed together.
Examples:
DB[:items].where(id: 3)
# SELECT * FROM items WHERE (id = 3)

DB[:items].where(Sequel.lit('price < ?', 100))
# SELECT * FROM items WHERE price < 100

DB[:items].where([[:id, [1,2,3]], [:id, 0..10]])
# SELECT * FROM items WHERE ((id IN (1, 2, 3)) AND ((id >= 0) AND (id <= 10)))

DB[:items].where(Sequel.lit('price < 100'))
# SELECT * FROM items WHERE price < 100

DB[:items].where(:active)
# SELECT * FROM items WHERE active

DB[:items].where{price < 100}
# SELECT * FROM items WHERE (price < 100)
Multiple where calls can be chained for scoping:
software = dataset.where(category: 'software').where{price < 100}
# SELECT * FROM items WHERE ((category = 'software') AND (price < 100))
See the “Dataset Filtering” guide for more examples and details.
# File lib/sequel/dataset/query.rb 1125 def where(*cond, &block) 1126 add_filter(:where, cond, &block) 1127 end
Return a clone of the dataset with an additional named window that can be referenced in window functions. See Sequel::SQL::Window for a list of options that can be passed in. Example:
DB[:items].window(:w, partition: :c1, order: :c2) # SELECT * FROM items WINDOW w AS (PARTITION BY c1 ORDER BY c2)
# File lib/sequel/dataset/query.rb 1135 def window(name, opts) 1136 clone(:window=>((@opts[:window]||EMPTY_ARRAY) + [[name, SQL::Window.new(opts)].freeze]).freeze) 1137 end
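A hedged sketch of referencing the named window from a window function (assumes window function support; the SQL shown is approximate):

DB[:items].window(:w, partition: :c1, order: :c2).
  select(Sequel.function(:rank).over(window: :w).as(:pos))
# SELECT rank() OVER (w) AS pos FROM items WINDOW w AS (PARTITION BY c1 ORDER BY c2)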
Add a common table expression (CTE) with the given name and a dataset that defines the CTE. A common table expression acts as an inline view for the query.
Options:
:args
Specify the arguments/columns for the CTE, should be an array of symbols.

:recursive
Specify that this is a recursive CTE.

:materialized
Set to false to force inlining of the CTE, or true to force not inlining the CTE (PostgreSQL 12+/SQLite 3.35+).
DB[:items].with(:items, DB[:syx].where(Sequel[:name].like('A%'))) # WITH items AS (SELECT * FROM syx WHERE (name LIKE 'A%' ESCAPE '\')) SELECT * FROM items
# File lib/sequel/dataset/query.rb 1150 def with(name, dataset, opts=OPTS) 1151 raise(Error, 'This dataset does not support common table expressions') unless supports_cte? 1152 if hoist_cte?(dataset) 1153 s, ds = hoist_cte(dataset) 1154 s.with(name, ds, opts) 1155 else 1156 clone(:with=>((@opts[:with]||EMPTY_ARRAY) + [Hash[opts].merge!(:name=>name, :dataset=>dataset)]).freeze) 1157 end 1158 end
Create a subclass of the receiver’s class, and include the given modules into it. If a block is provided, a DatasetModule is created using the block and is included into the subclass. Create an instance of the subclass using the same db and opts, so that the returned dataset operates similarly to a clone extended with the given modules. This approach is used to avoid singleton classes, which significantly improves performance.
Note that like Object#extend, when multiple modules are provided as arguments the subclass includes the modules in reverse order.
# File lib/sequel/dataset/query.rb 1239 def with_extend(*mods, &block) 1240 c = Sequel.set_temp_name(Class.new(self.class)){"Sequel::Dataset::_Subclass"} 1241 c.include(*mods) unless mods.empty? 1242 c.include(Sequel.set_temp_name(DatasetModule.new(&block)){"Sequel::Dataset::_DatasetModule(#{block.source_location[0,2].join(':')})"}) if block 1243 o = c.freeze.allocate 1244 o.instance_variable_set(:@db, @db) 1245 o.instance_variable_set(:@opts, @opts) 1246 o.instance_variable_set(:@cache, {}) 1247 if cols = cache_get(:_columns) 1248 o.send(:columns=, cols) 1249 end 1250 o.freeze 1251 end
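A small sketch using the block form (the cheap method is invented for illustration):

ds = DB[:items].with_extend do
  def cheap
    where{price < 100}
  end
end
ds.cheap.all
# SELECT * FROM items WHERE (price < 100)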
Add a recursive common table expression (CTE) with the given name, a dataset that defines the nonrecursive part of the CTE, and a dataset that defines the recursive part of the CTE.
Options:
:args
Specify the arguments/columns for the CTE, should be an array of symbols.

:union_all
Set to false to use UNION instead of UNION ALL when combining the nonrecursive and recursive parts.

PostgreSQL 14+ Options:

:cycle
Stop recursive searching when a cycle is detected. Includes two columns in the result of the CTE, a cycle column indicating whether a cycle was detected for the current row, and a path column for the path traversed to get to the current row. If given, must be a hash of cycle options (e.g. :columns, :cycle_value, and :noncycle_value, as shown in the example below).

:search
Include an order column in the result of the CTE that allows for breadth or depth first searching. If given, must be a hash of search options (e.g. :by and :type, as shown in the example below).
DB[:t].with_recursive(:t,
  DB[:i1].select(:id, :parent_id).where(parent_id: nil),
  DB[:i1].join(:t, id: :parent_id).select(Sequel[:i1][:id], Sequel[:i1][:parent_id]),
  args: [:id, :parent_id])
# WITH RECURSIVE t(id, parent_id) AS (
#   SELECT id, parent_id FROM i1 WHERE (parent_id IS NULL)
#   UNION ALL
#   SELECT i1.id, i1.parent_id FROM i1 INNER JOIN t ON (t.id = i1.parent_id)
# ) SELECT * FROM t

DB[:t].with_recursive(:t,
  DB[:i1].where(parent_id: nil),
  DB[:i1].join(:t, id: :parent_id).select_all(:i1),
  search: {by: :id, type: :breadth},
  cycle: {columns: :id, cycle_value: 1, noncycle_value: 2})
# WITH RECURSIVE t AS (
#   SELECT * FROM i1 WHERE (parent_id IS NULL)
#   UNION ALL
#   (SELECT i1.* FROM i1 INNER JOIN t ON (t.id = i1.parent_id))
# )
# SEARCH BREADTH FIRST BY id SET ordercol
# CYCLE id SET is_cycle TO 1 DEFAULT 2 USING path
# SELECT * FROM t
# File lib/sequel/dataset/query.rb 1216 def with_recursive(name, nonrecursive, recursive, opts=OPTS) 1217 raise(Error, 'This dataset does not support common table expressions') unless supports_cte? 1218 if hoist_cte?(nonrecursive) 1219 s, ds = hoist_cte(nonrecursive) 1220 s.with_recursive(name, ds, recursive, opts) 1221 elsif hoist_cte?(recursive) 1222 s, ds = hoist_cte(recursive) 1223 s.with_recursive(name, nonrecursive, ds, opts) 1224 else 1225 clone(:with=>((@opts[:with]||EMPTY_ARRAY) + [Hash[opts].merge!(:recursive=>true, :name=>name, :dataset=>nonrecursive.union(recursive, {:all=>opts[:union_all] != false, :from_self=>false}))]).freeze) 1226 end 1227 end
Returns a cloned dataset with the given row_proc.
ds = DB[:items]
ds.all # => [{:id=>2}]
ds.with_row_proc(:invert.to_proc).all # => [{2=>:id}]
# File lib/sequel/dataset/query.rb 1268 def with_row_proc(callable) 1269 clone(:row_proc=>callable) 1270 end
Returns a copy of the dataset with the static SQL used. This is useful if you want to keep the same row_proc/graph, but change the SQL used to custom SQL.
DB[:items].with_sql('SELECT * FROM foo') # SELECT * FROM foo
You can use placeholders in your SQL and provide arguments for those placeholders:
DB[:items].with_sql('SELECT ? FROM foo', 1) # SELECT 1 FROM foo
You can also provide a method name and arguments to call to get the SQL:
DB[:items].with_sql(:insert_sql, b: 1) # INSERT INTO items (b) VALUES (1)
Note that datasets that specify custom SQL using this method will generally ignore future dataset methods that modify the SQL used, as specifying custom SQL overrides Sequel’s SQL generator. You should probably limit yourself to the following dataset methods when using this method, or use the implicit_subquery extension:
- each
- all
- single_record (if only one record could be returned)
- single_value (if only one record could be returned, and a single column is selected)
- map
- delete (if a DELETE statement)
- update (if an UPDATE statement, with no arguments)
- insert (if an INSERT statement, with no arguments)
- truncate (if a TRUNCATE statement, with no arguments)
# File lib/sequel/dataset/query.rb 1302 def with_sql(sql, *args) 1303 if sql.is_a?(Symbol) 1304 sql = public_send(sql, *args) 1305 else 1306 sql = SQL::PlaceholderLiteralString.new(sql, args) unless args.empty? 1307 end 1308 clone(:sql=>sql) 1309 end
Protected Instance methods
Add the dataset to the list of compounds
# File lib/sequel/dataset/query.rb 1314 def compound_clone(type, dataset, opts) 1315 if dataset.is_a?(Dataset) && dataset.opts[:with] && !supports_cte_in_compounds? 1316 s, ds = hoist_cte(dataset) 1317 return s.compound_clone(type, ds, opts) 1318 end 1319 ds = compound_from_self.clone(:compounds=>(Array(@opts[:compounds]).map(&:dup) + [[type, dataset.compound_from_self, opts[:all]].freeze]).freeze) 1320 opts[:from_self] == false ? ds : ds.from_self(opts) 1321 end
Return true if the dataset has a non-nil value for any key in opts.
# File lib/sequel/dataset/query.rb 1324 def options_overlap(opts) 1325 !(@opts.map{|k,v| k unless v.nil?}.compact & opts).empty? 1326 end
Whether this dataset is a simple select from an underlying table, such as:
SELECT * FROM table
SELECT table.* FROM table
# File lib/sequel/dataset/query.rb 1335 def simple_select_all? 1336 return false unless (f = @opts[:from]) && f.length == 1 1337 o = @opts.reject{|k,v| v.nil? || non_sql_option?(k)} 1338 from = f.first 1339 from = from.expression if from.is_a?(SQL::AliasedExpression) 1340 1341 if SIMPLE_SELECT_ALL_ALLOWED_FROM.any?{|x| from.is_a?(x)} 1342 case o.length 1343 when 1 1344 true 1345 when 2 1346 (s = o[:select]) && s.length == 1 && s.first.is_a?(SQL::ColumnAll) 1347 else 1348 false 1349 end 1350 else 1351 false 1352 end 1353 end
2 - Methods that execute code on the database
Constants
ACTION_METHODS = (<<-METHS).split.map(&:to_sym).freeze
Action methods defined by Sequel that execute code on the database.

COLUMNS_CLONE_OPTIONS = {:distinct => nil, :limit => 0, :offset=>nil, :where=>nil, :having=>nil, :order=>nil, :row_proc=>nil, :graph=>nil, :eager_graph=>nil}.freeze
The clone options to use when retrieving columns for a dataset.

COUNT_SELECT = Sequel.function(:count).*.as(:count)

EMPTY_SELECT = Sequel::SQL::AliasedExpression.new(1, :one)
Public Instance Aliases
with_sql_update -> with_sql_delete
Public Instance methods
Inserts the given argument into the database. Returns self so it can be used safely when chaining:
DB[:items] << {id: 0, name: 'Zero'} << DB[:old_items].select(:id, :name)
# File lib/sequel/dataset/actions.rb 29 def <<(arg) 30 insert(arg) 31 self 32 end
Returns the first record matching the conditions. Examples:
DB[:table][id: 1] # SELECT * FROM table WHERE (id = 1) LIMIT 1 # => {:id=>1}
# File lib/sequel/dataset/actions.rb 38 def [](*conditions) 39 raise(Error, 'You cannot call Dataset#[] with an integer or with no arguments') if (conditions.length == 1 and conditions.first.is_a?(Integer)) or conditions.length == 0 40 first(*conditions) 41 end
Returns an array with all records in the dataset. If a block is given, the array is iterated over after all items have been loaded.
DB[:table].all # SELECT * FROM table
# => [{:id=>1, ...}, {:id=>2, ...}, ...]

# Iterate over all rows in the table
DB[:table].all{|row| p row}
# File lib/sequel/dataset/actions.rb 51 def all(&block) 52 _all(block){|a| each{|r| a << r}} 53 end
Returns a hash with one column used as key and another used as value. If rows have duplicate values for the key column, the latter row(s) will overwrite the value of the previous row(s). If the value_column is not given or nil, uses the entire hash as the value.
DB[:table].as_hash(:id, :name) # SELECT * FROM table
# {1=>'Jim', 2=>'Bob', ...}

DB[:table].as_hash(:id) # SELECT * FROM table
# {1=>{:id=>1, :name=>'Jim'}, 2=>{:id=>2, :name=>'Bob'}, ...}
You can also provide an array of column names for either the key_column, the value column, or both:
DB[:table].as_hash([:id, :foo], [:name, :bar]) # SELECT * FROM table
# {[1, 3]=>['Jim', 'bo'], [2, 4]=>['Bob', 'be'], ...}

DB[:table].as_hash([:id, :name]) # SELECT * FROM table
# {[1, 'Jim']=>{:id=>1, :name=>'Jim'}, [2, 'Bob']=>{:id=>2, :name=>'Bob'}, ...}
Options:
:all
Use all instead of each to retrieve the objects.

:hash
The object into which the values will be placed. If this is not given, an empty hash is used. This can be used to use a hash with a default value or default proc.
# File lib/sequel/dataset/actions.rb 901 def as_hash(key_column, value_column = nil, opts = OPTS) 902 h = opts[:hash] || {} 903 meth = opts[:all] ? :all : :each 904 if value_column 905 return naked.as_hash(key_column, value_column, opts) if row_proc 906 if value_column.is_a?(Array) 907 if key_column.is_a?(Array) 908 public_send(meth){|r| h[r.values_at(*key_column)] = r.values_at(*value_column)} 909 else 910 public_send(meth){|r| h[r[key_column]] = r.values_at(*value_column)} 911 end 912 else 913 if key_column.is_a?(Array) 914 public_send(meth){|r| h[r.values_at(*key_column)] = r[value_column]} 915 else 916 public_send(meth){|r| h[r[key_column]] = r[value_column]} 917 end 918 end 919 elsif key_column.is_a?(Array) 920 public_send(meth){|r| h[key_column.map{|k| r[k]}] = r} 921 else 922 public_send(meth){|r| h[r[key_column]] = r} 923 end 924 h 925 end
Returns a set of the values of the given column for the records in the dataset.
DB[:table].as_set(:id) # SELECT * FROM table # => Set[1, 2, 3, ...]
You can also provide an array of column names, in which case the elements of the returned set are arrays (not sets):
DB[:table].as_set([:id, :name]) # SELECT * FROM table # => Set[[1, 'A'], [2, 'B'], [3, 'C'], ...]
# File lib/sequel/dataset/actions.rb 65 def as_set(column) 66 return naked.as_set(column) if row_proc 67 68 if column.is_a?(Array) 69 to_set{|r| r.values_at(*column)} 70 else 71 to_set{|r| r[column]} 72 end 73 end
Returns the average value for the given column/expression. Uses a virtual row block if no argument is given.
DB[:table].avg(:number) # SELECT avg(number) FROM table LIMIT 1 # => 3 DB[:table].avg{function(column)} # SELECT avg(function(column)) FROM table LIMIT 1 # => 1
# File lib/sequel/dataset/actions.rb 82 def avg(arg=(no_arg = true), &block) 83 arg = Sequel.virtual_row(&block) if no_arg 84 _aggregate(:avg, arg) 85 end
Returns the columns in the result set in order as an array of symbols. If the columns are currently cached, returns the cached value. Otherwise, a SELECT query is performed to retrieve a single row in order to get the columns.
If you are looking for all columns for a single table and maybe some information about each column (e.g. database type), see Database#schema.
DB[:table].columns # => [:id, :name]
# File lib/sequel/dataset/actions.rb 96 def columns 97 _columns || columns! 98 end
Ignore any cached column information and perform a query to retrieve a row in order to get the columns.
DB[:table].columns! # => [:id, :name]
# File lib/sequel/dataset/actions.rb 105 def columns! 106 ds = clone(COLUMNS_CLONE_OPTIONS) 107 ds.each{break} 108 109 if cols = ds.cache[:_columns] 110 self.columns = cols 111 else 112 [] 113 end 114 end
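A hedged sketch of the difference from columns (column names hypothetical):
ds = DB[:table]
ds.columns    # => [:id, :name]               (queried once, then cached on the dataset)
# after the table gains a column (e.g. via ALTER TABLE), the cached value is stale:
ds.columns!   # => [:id, :name, :created_at]  (ignores the cache and queries again)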
Returns the number of records in the dataset. If an argument is provided, it is used as the argument to count. If a block is provided, it is treated as a virtual row, and the result is used as the argument to count.
DB[:table].count # SELECT count(*) AS count FROM table LIMIT 1 # => 3 DB[:table].count(:column) # SELECT count(column) AS count FROM table LIMIT 1 # => 2 DB[:table].count{foo(column)} # SELECT count(foo(column)) AS count FROM table LIMIT 1 # => 1
# File lib/sequel/dataset/actions.rb 129 def count(arg=(no_arg=true), &block) 130 if no_arg && !block 131 cached_dataset(:_count_ds) do 132 aggregate_dataset.select(COUNT_SELECT).single_value_ds 133 end.single_value!.to_i 134 else 135 if block 136 if no_arg 137 arg = Sequel.virtual_row(&block) 138 else 139 raise Error, 'cannot provide both argument and block to Dataset#count' 140 end 141 end 142 143 _aggregate(:count, arg) 144 end 145 end
Deletes the records in the dataset, returning the number of records deleted.
DB[:table].delete # DELETE FROM table # => 3
Some databases support using multiple tables in a DELETE query. This requires multiple FROM tables (JOINs can also be used). As multiple FROM tables use an implicit CROSS JOIN, you should make sure your WHERE condition uses the appropriate filters for the FROM tables:
DB.from(:a, :b).join(:c, :d=>Sequel[:b][:e]).where{{a[:f]=>b[:g], a[:id]=>c[:h]}}. delete # DELETE FROM a # USING b # INNER JOIN c ON (c.d = b.e) # WHERE ((a.f = b.g) AND (a.id = c.h))
# File lib/sequel/dataset/actions.rb 163 def delete(&block) 164 sql = delete_sql 165 if uses_returning?(:delete) 166 returning_fetch_rows(sql, &block) 167 else 168 execute_dui(sql) 169 end 170 end
Iterates over the records in the dataset as they are yielded from the database adapter, and returns self.
DB[:table].each{|row| p row} # SELECT * FROM table
Note that this method is not safe to use on many adapters if you are running additional queries inside the provided block. If you are running queries inside the block, you should use all instead of each for the outer queries, or use a separate thread or shard inside each.
# File lib/sequel/dataset/actions.rb 181 def each 182 if rp = row_proc 183 fetch_rows(select_sql){|r| yield rp.call(r)} 184 else 185 fetch_rows(select_sql){|r| yield r} 186 end 187 self 188 end
Returns true if no records exist in the dataset, false otherwise
DB[:table].empty? # SELECT 1 AS one FROM table LIMIT 1 # => false
# File lib/sequel/dataset/actions.rb 196 def empty? 197 cached_dataset(:_empty_ds) do 198 (@opts[:sql] ? from_self : self).single_value_ds.unordered.select(EMPTY_SELECT) 199 end.single_value!.nil? 200 end
Returns the first matching record if no arguments are given. If an integer argument is given, it is interpreted as a limit, and then returns all matching records up to that limit. If any other type of argument(s) is passed, it is treated as a filter and the first matching record is returned. If a block is given, it is used to filter the dataset before returning anything.
If there are no records in the dataset, returns nil (or an empty array if an integer argument is given).
Examples:
DB[:table].first # SELECT * FROM table LIMIT 1 # => {:id=>7} DB[:table].first(2) # SELECT * FROM table LIMIT 2 # => [{:id=>6}, {:id=>4}] DB[:table].first(id: 2) # SELECT * FROM table WHERE (id = 2) LIMIT 1 # => {:id=>2} DB[:table].first(Sequel.lit("id = 3")) # SELECT * FROM table WHERE (id = 3) LIMIT 1 # => {:id=>3} DB[:table].first(Sequel.lit("id = ?", 4)) # SELECT * FROM table WHERE (id = 4) LIMIT 1 # => {:id=>4} DB[:table].first{id > 2} # SELECT * FROM table WHERE (id > 2) LIMIT 1 # => {:id=>5} DB[:table].first(Sequel.lit("id > ?", 4)){id < 6} # SELECT * FROM table WHERE ((id > 4) AND (id < 6)) LIMIT 1 # => {:id=>5} DB[:table].first(2){id < 2} # SELECT * FROM table WHERE (id < 2) LIMIT 2 # => [{:id=>1}]
# File lib/sequel/dataset/actions.rb 237 def first(*args, &block) 238 case args.length 239 when 0 240 unless block 241 return(@opts[:sql] ? single_record! : single_record) 242 end 243 when 1 244 arg = args[0] 245 if arg.is_a?(Integer) 246 res = if block 247 if loader = cached_placeholder_literalizer(:_first_integer_cond_loader) do |pl| 248 where(pl.arg).limit(pl.arg) 249 end 250 251 loader.all(filter_expr(&block), arg) 252 else 253 where(&block).limit(arg).all 254 end 255 else 256 if loader = cached_placeholder_literalizer(:_first_integer_loader) do |pl| 257 limit(pl.arg) 258 end 259 260 loader.all(arg) 261 else 262 limit(arg).all 263 end 264 end 265 266 return res 267 end 268 where_args = args 269 args = arg 270 end 271 272 if loader = cached_where_placeholder_literalizer(where_args||args, block, :_first_cond_loader) do |pl| 273 _single_record_ds.where(pl.arg) 274 end 275 276 loader.first(filter_expr(args, &block)) 277 else 278 _single_record_ds.where(args, &block).single_record! 279 end 280 end
Calls first. If first returns nil (signaling that no row matches), raises a Sequel::NoMatchingRow exception.
# File lib/sequel/dataset/actions.rb 284 def first!(*args, &block) 285 first(*args, &block) || raise(Sequel::NoMatchingRow.new(self)) 286 end
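A hedged usage sketch (the condition is hypothetical):
begin
  DB[:table].first!(id: 0)
rescue Sequel::NoMatchingRow
  # no row with id 0; handle the missing-row case here
end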
Return the column value for the first matching record in the dataset. Raises an error if both an argument and a block are given.
DB[:table].get(:id) # SELECT id FROM table LIMIT 1 # => 3 ds.get{sum(id)} # SELECT sum(id) AS v FROM table LIMIT 1 # => 6
You can pass an array of arguments to return multiple arguments, but you must make sure each element in the array has an alias that Sequel can determine:
DB[:table].get([:id, :name]) # SELECT id, name FROM table LIMIT 1 # => [3, 'foo'] DB[:table].get{[sum(id).as(sum), name]} # SELECT sum(id) AS sum, name FROM table LIMIT 1 # => [6, 'foo']
If called on a dataset with raw SQL, returns the first value in the dataset without changing the selection or setting a limit:
DB["SELECT id FROM table"].get # SELECT id FROM table # => 3
# File lib/sequel/dataset/actions.rb 312 def get(column=(no_arg=true; nil), &block) 313 ds = naked 314 if block 315 raise(Error, 'Must call Dataset#get with an argument or a block, not both') unless no_arg 316 ds = ds.select(&block) 317 column = ds.opts[:select] 318 column = nil if column.is_a?(Array) && column.length < 2 319 elsif no_arg && opts[:sql] 320 return ds.single_value! 321 else 322 case column 323 when Array 324 ds = ds.select(*column) 325 when LiteralString, Symbol, SQL::Identifier, SQL::QualifiedIdentifier, SQL::AliasedExpression 326 if loader = cached_placeholder_literalizer(:_get_loader) do |pl| 327 ds.single_value_ds.select(pl.arg) 328 end 329 330 return loader.get(column) 331 end 332 333 ds = ds.select(column) 334 else 335 if loader = cached_placeholder_literalizer(:_get_alias_loader) do |pl| 336 ds.single_value_ds.select(Sequel.as(pl.arg, :v)) 337 end 338 339 return loader.get(column) 340 end 341 342 ds = ds.select(Sequel.as(column, :v)) 343 end 344 end 345 346 if column.is_a?(Array) 347 if r = ds.single_record 348 r.values_at(*hash_key_symbols(column)) 349 end 350 else 351 ds.single_value 352 end 353 end
Inserts multiple records into the associated table. This method can be used to efficiently insert a large number of records into a table in a single query if the database supports it. Inserts are automatically wrapped in a transaction if necessary.
This method is called with a columns array and an array of value arrays:
DB[:table].import([:x, :y], [[1, 2], [3, 4]]) # INSERT INTO table (x, y) VALUES (1, 2) # INSERT INTO table (x, y) VALUES (3, 4)
or, if the database supports it:
# INSERT INTO table (x, y) VALUES (1, 2), (3, 4)
This method also accepts a dataset instead of an array of value arrays:
DB[:table].import([:x, :y], DB[:table2].select(:a, :b)) # INSERT INTO table (x, y) SELECT a, b FROM table2
Options:
| :commit_every |
Open a new transaction for every given number of records. For example, if you provide a value of 50, will commit after every 50 records. When a transaction is not required, this option controls the maximum number of values to insert with a single statement; it does not force the use of a transaction. |
| :return |
When this is set to :primary_key, returns an array of autoincremented primary key values for the rows inserted. This does not have an effect if values is a Dataset.
| :server |
Set the server/shard to use for the transaction and insert queries. |
| :skip_transaction |
Do not use a transaction even when using multiple INSERT queries. |
| :slice |
Same as :commit_every; :commit_every takes precedence.
# File lib/sequel/dataset/actions.rb 391 def import(columns, values, opts=OPTS) 392 return insert(columns, values) if values.is_a?(Dataset) 393 394 return if values.empty? 395 raise(Error, 'Using Sequel::Dataset#import with an empty column array is not allowed') if columns.empty? 396 ds = opts[:server] ? server(opts[:server]) : self 397 398 if slice_size = opts.fetch(:commit_every, opts.fetch(:slice, default_import_slice)) 399 offset = 0 400 rows = [] 401 while offset < values.length 402 rows << ds._import(columns, values[offset, slice_size], opts) 403 offset += slice_size 404 end 405 rows.flatten 406 else 407 ds._import(columns, values, opts) 408 end 409 end
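A hedged sketch of the options above (the rows and the returned key values are hypothetical):
rows = (1..10_000).map{|i| [i, i * 2]}
DB[:table].import([:x, :y], rows, commit_every: 500)   # commit after every 500 rows
DB[:table].import([:x, :y], [[1, 2], [3, 4]], return: :primary_key)
# => [101, 102]   (autoincremented primary keys, where the database supports this)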
Inserts values into the associated table. The returned value is generally the value of the autoincremented primary key for the inserted row, assuming that a single row is inserted and the table has an autoincrementing primary key.
insert handles a number of different argument formats:
| no arguments or single empty hash |
Uses DEFAULT VALUES
| single hash |
Most common format, treats keys as columns and values as values |
| single array |
Treats entries as values, with no columns |
| two arrays |
Treats first array as columns, second array as values |
| single Dataset |
Treats as an insert based on a selection from the dataset given, with no columns |
| array and dataset |
Treats as an insert based on a selection from the dataset given, with the columns given by the array. |
Examples:
DB[:items].insert # INSERT INTO items DEFAULT VALUES DB[:items].insert({}) # INSERT INTO items DEFAULT VALUES DB[:items].insert([1,2,3]) # INSERT INTO items VALUES (1, 2, 3) DB[:items].insert([:a, :b], [1,2]) # INSERT INTO items (a, b) VALUES (1, 2) DB[:items].insert(a: 1, b: 2) # INSERT INTO items (a, b) VALUES (1, 2) DB[:items].insert(DB[:old_items]) # INSERT INTO items SELECT * FROM old_items DB[:items].insert([:a, :b], DB[:old_items]) # INSERT INTO items (a, b) SELECT * FROM old_items
# File lib/sequel/dataset/actions.rb 447 def insert(*values, &block) 448 sql = insert_sql(*values) 449 if uses_returning?(:insert) 450 returning_fetch_rows(sql, &block) 451 else 452 execute_insert(sql) 453 end 454 end
Reverses the order and then runs first with the given arguments and block. Note that this will not necessarily give you the last record in the dataset, unless you have an unambiguous order. If there is not currently an order for this dataset, raises an Error.
DB[:table].order(:id).last # SELECT * FROM table ORDER BY id DESC LIMIT 1 # => {:id=>10} DB[:table].order(Sequel.desc(:id)).last(2) # SELECT * FROM table ORDER BY id ASC LIMIT 2 # => [{:id=>1}, {:id=>2}]
# File lib/sequel/dataset/actions.rb 466 def last(*args, &block) 467 raise(Error, 'No order specified') unless @opts[:order] 468 reverse.first(*args, &block) 469 end
Maps column values for each record in the dataset (if an argument is given) or performs the stock mapping functionality of Enumerable otherwise. Raises an Error if both an argument and block are given.
DB[:table].map(:id) # SELECT * FROM table # => [1, 2, 3, ...] DB[:table].map{|r| r[:id] * 2} # SELECT * FROM table # => [2, 4, 6, ...]
You can also provide an array of column names:
DB[:table].map([:id, :name]) # SELECT * FROM table # => [[1, 'A'], [2, 'B'], [3, 'C'], ...]
# File lib/sequel/dataset/actions.rb 485 def map(column=nil, &block) 486 if column 487 raise(Error, 'Must call Dataset#map with either an argument or a block, not both') if block 488 return naked.map(column) if row_proc 489 if column.is_a?(Array) 490 super(){|r| r.values_at(*column)} 491 else 492 super(){|r| r[column]} 493 end 494 else 495 super(&block) 496 end 497 end
Returns the maximum value for the given column/expression. Uses a virtual row block if no argument is given.
DB[:table].max(:id) # SELECT max(id) FROM table LIMIT 1 # => 10 DB[:table].max{function(column)} # SELECT max(function(column)) FROM table LIMIT 1 # => 7
# File lib/sequel/dataset/actions.rb 506 def max(arg=(no_arg = true), &block) 507 arg = Sequel.virtual_row(&block) if no_arg 508 _aggregate(:max, arg) 509 end
Execute a MERGE statement, which allows for INSERT, UPDATE, and DELETE behavior in a single query, depending on whether rows from the source table match rows in the current table according to the join conditions.
Unless the dataset uses static SQL, to use merge, you must first have called merge_using to specify the merge source and join conditions. You will then likely call one or more of merge_insert, merge_update, and merge_delete to specify the MERGE behavior by adding WHEN [NOT] MATCHED clauses.
The WHEN [NOT] MATCHED clauses are added to the SQL in the order these methods were called on the dataset. If none of these methods are called, an error is raised.
Example:
DB[:m1].merge_using(:m2, i1: :i2). merge_insert(i1: :i2, a: Sequel[:b]+11). merge_delete{a > 30}. merge_update(i1: Sequel[:i1]+:i2+10, a: Sequel[:a]+:b+20). merge
SQL:
MERGE INTO m1 USING m2 ON (i1 = i2) WHEN NOT MATCHED THEN INSERT (i1, a) VALUES (i2, (b + 11)) WHEN MATCHED AND (a > 30) THEN DELETE WHEN MATCHED THEN UPDATE SET i1 = (i1 + i2 + 10), a = (a + b + 20)
On PostgreSQL, two additional merge methods are supported, for the PostgreSQL-specific DO NOTHING syntax.
- merge_do_nothing_when_matched
- merge_do_nothing_when_not_matched
This method is supported on Oracle, but Oracle’s MERGE support is non-standard, and has the following issues:
- DELETE clause requires UPDATE clause
- DELETE clause requires a condition
- DELETE clause only affects rows updated by UPDATE clause
# File lib/sequel/dataset/actions.rb 556 def merge 557 execute_ddl(merge_sql) 558 end
Returns the minimum value for the given column/expression. Uses a virtual row block if no argument is given.
DB[:table].min(:id) # SELECT min(id) FROM table LIMIT 1 # => 1 DB[:table].min{function(column)} # SELECT min(function(column)) FROM table LIMIT 1 # => 0
# File lib/sequel/dataset/actions.rb 567 def min(arg=(no_arg = true), &block) 568 arg = Sequel.virtual_row(&block) if no_arg 569 _aggregate(:min, arg) 570 end
This is a front end for import that allows you to submit an array of hashes instead of arrays of columns and values:
DB[:table].multi_insert([{x: 1}, {x: 2}]) # INSERT INTO table (x) VALUES (1) # INSERT INTO table (x) VALUES (2)
Be aware that all hashes should have the same keys if you use this calling method, otherwise some columns could be missed or set to null instead of to default values.
This respects the same options as import.
# File lib/sequel/dataset/actions.rb 584 def multi_insert(hashes, opts=OPTS) 585 return if hashes.empty? 586 columns = hashes.first.keys 587 import(columns, hashes.map{|h| columns.map{|c| h[c]}}, opts) 588 end
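A hedged sketch of the caveat about mismatched hash keys (column names hypothetical): the columns are taken from the first hash, so keys missing from it are silently dropped for every row.
DB[:table].multi_insert([{x: 1}, {x: 2, y: 3}])
# INSERT INTO table (x) VALUES (1)
# INSERT INTO table (x) VALUES (2)   # :y from the second hash is not inserted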
Yields each row in the dataset, but internally uses multiple queries as needed to process the entire result set without keeping all rows in the dataset in memory, even if the underlying driver buffers all query results in memory.
Because this uses multiple queries internally, it also uses a transaction internally to remain consistent. Additionally, to work correctly, the dataset must have an unambiguous order. Using an ambiguous order can result in an infinite loop, as well as subtler bugs such as yielding duplicate rows or skipping rows.
Sequel checks that the datasets using this method have an order, but it cannot ensure that the order is unambiguous.
Note that this method is not safe to use on many adapters if you are running additional queries inside the provided block. If you are running queries inside the block, use a separate thread or shard inside paged_each.
Options:
| :rows_per_fetch |
The number of rows to fetch per query. Defaults to 1000. |
| :strategy |
The strategy to use for paging of results. By default this is :offset, for using an approach with a limit and offset for every page. This can be set to :filter, which uses a limit and a filter that excludes rows from previous pages. In order for this strategy to work, you must be selecting the columns you are ordering by, and none of the columns can contain NULLs. Note that some adapters have optimized implementations that will use cursors or streaming regardless of the :strategy option used.
| :filter_values |
If the strategy: :filter option is used, this option should be a proc that accepts the last retrieved row for the previous page and an array of ORDER BY expressions, and returns an array of values relating to those expressions for the last retrieved row. You will need to use this option if your ORDER BY expressions are not simple columns, if they contain qualified identifiers that would be ambiguous unqualified, if they contain any identifiers that are aliased in SELECT, and potentially other cases. |
| :skip_transaction |
Do not use a transaction. This can be useful if you want to prevent a lock on the database table, at the expense of consistency. |
Examples:
DB[:table].order(:id).paged_each{|row| } # SELECT * FROM table ORDER BY id LIMIT 1000 # SELECT * FROM table ORDER BY id LIMIT 1000 OFFSET 1000 # ... DB[:table].order(:id).paged_each(rows_per_fetch: 100){|row| } # SELECT * FROM table ORDER BY id LIMIT 100 # SELECT * FROM table ORDER BY id LIMIT 100 OFFSET 100 # ... DB[:table].order(:id).paged_each(strategy: :filter){|row| } # SELECT * FROM table ORDER BY id LIMIT 1000 # SELECT * FROM table WHERE id > 1001 ORDER BY id LIMIT 1000 # ... DB[:table].order(:id).paged_each(strategy: :filter, filter_values: lambda{|row, exprs| [row[:id]]}){|row| } # SELECT * FROM table ORDER BY id LIMIT 1000 # SELECT * FROM table WHERE id > 1001 ORDER BY id LIMIT 1000 # ...
# File lib/sequel/dataset/actions.rb 647 def paged_each(opts=OPTS) 648 unless @opts[:order] 649 raise Sequel::Error, "Dataset#paged_each requires the dataset be ordered" 650 end 651 unless defined?(yield) 652 return enum_for(:paged_each, opts) 653 end 654 655 total_limit = @opts[:limit] 656 offset = @opts[:offset] 657 if server = @opts[:server] 658 opts = Hash[opts] 659 opts[:server] = server 660 end 661 662 rows_per_fetch = opts[:rows_per_fetch] || 1000 663 strategy = if offset || total_limit 664 :offset 665 else 666 opts[:strategy] || :offset 667 end 668 669 db.transaction(opts) do 670 case strategy 671 when :filter 672 filter_values = opts[:filter_values] || proc{|row, exprs| exprs.map{|e| row[hash_key_symbol(e)]}} 673 base_ds = ds = limit(rows_per_fetch) 674 while ds 675 last_row = nil 676 ds.each do |row| 677 last_row = row 678 yield row 679 end 680 ds = (base_ds.where(ignore_values_preceding(last_row, &filter_values)) if last_row) 681 end 682 else 683 offset ||= 0 684 num_rows_yielded = rows_per_fetch 685 total_rows = 0 686 687 while num_rows_yielded == rows_per_fetch && (total_limit.nil? || total_rows < total_limit) 688 if total_limit && total_rows + rows_per_fetch > total_limit 689 rows_per_fetch = total_limit - total_rows 690 end 691 692 num_rows_yielded = 0 693 limit(rows_per_fetch, offset).each do |row| 694 num_rows_yielded += 1 695 total_rows += 1 if total_limit 696 yield row 697 end 698 699 offset += rows_per_fetch 700 end 701 end 702 end 703 704 self 705 end
Returns a hash with key_column values as keys and value_column values as values. Similar to as_hash, but only selects the columns given. Like as_hash, it accepts an optional :hash parameter, into which entries will be merged.
DB[:table].select_hash(:id, :name) # SELECT id, name FROM table # => {1=>'a', 2=>'b', ...}
You can also provide an array of column names for either the key_column, the value column, or both:
DB[:table].select_hash([:id, :foo], [:name, :bar]) # SELECT id, foo, name, bar FROM table # => {[1, 3]=>['a', 'c'], [2, 4]=>['b', 'd'], ...}
When using this method, you must be sure that each expression has an alias that Sequel can determine.
# File lib/sequel/dataset/actions.rb 725 def select_hash(key_column, value_column, opts = OPTS) 726 _select_hash(:as_hash, key_column, value_column, opts) 727 end
Returns a hash with key_column values as keys and arrays of value_column values as values. Similar to to_hash_groups, but only selects the columns given. Like to_hash_groups, it accepts an optional :hash parameter, into which entries will be merged.
DB[:table].select_hash_groups(:name, :id) # SELECT id, name FROM table # => {'a'=>[1, 4, ...], 'b'=>[2, ...], ...}
You can also provide an array of column names for either the key_column, the value column, or both:
DB[:table].select_hash_groups([:first, :middle], [:last, :id]) # SELECT first, middle, last, id FROM table # => {['a', 'b']=>[['c', 1], ['d', 2], ...], ...}
When using this method, you must be sure that each expression has an alias that Sequel can determine.
# File lib/sequel/dataset/actions.rb 746 def select_hash_groups(key_column, value_column, opts = OPTS) 747 _select_hash(:to_hash_groups, key_column, value_column, opts) 748 end
Selects the column given (either as an argument or as a block), and returns an array of all values of that column in the dataset.
DB[:table].select_map(:id) # SELECT id FROM table # => [3, 5, 8, 1, ...] DB[:table].select_map{id * 2} # SELECT (id * 2) FROM table # => [6, 10, 16, 2, ...]
You can also provide an array of column names:
DB[:table].select_map([:id, :name]) # SELECT id, name FROM table # => [[1, 'A'], [2, 'B'], [3, 'C'], ...]
If you provide an array of expressions, you must be sure that each entry in the array has an alias that Sequel can determine.
# File lib/sequel/dataset/actions.rb 766 def select_map(column=nil, &block) 767 _select_map(column, false, &block) 768 end
The same as select_map, but in addition orders the array by the column.
DB[:table].select_order_map(:id) # SELECT id FROM table ORDER BY id # => [1, 2, 3, 4, ...] DB[:table].select_order_map{id * 2} # SELECT (id * 2) FROM table ORDER BY (id * 2) # => [2, 4, 6, 8, ...]
You can also provide an array of column names:
DB[:table].select_order_map([:id, :name]) # SELECT id, name FROM table ORDER BY id, name # => [[1, 'A'], [2, 'B'], [3, 'C'], ...]
If you provide an array of expressions, you must be sure that each entry in the array has an alias that Sequel can determine.
# File lib/sequel/dataset/actions.rb 785 def select_order_map(column=nil, &block) 786 _select_map(column, true, &block) 787 end
Selects the column given (either as an argument or as a block), and returns a set of all values of that column in the dataset.
DB[:table].select_set(:id) # SELECT id FROM table # => Set[3, 5, 8, 1, ...] DB[:table].select_set{id * 2} # SELECT (id * 2) FROM table # => Set[6, 10, 16, 2, ...]
You can also provide an array of column names, which returns a set with array elements (not set elements):
DB[:table].select_set([:id, :name]) # SELECT id, name FROM table # => Set[[1, 'A'], [2, 'B'], [3, 'C'], ...]
If you provide an array of expressions, you must be sure that each entry in the array has an alias that Sequel can determine.
# File lib/sequel/dataset/actions.rb 806 def select_set(column=nil, &block) 807 ds = ungraphed.naked 808 columns = Array(column) 809 virtual_row_columns(columns, block) 810 if column.is_a?(Array) || (columns.length > 1) 811 ds.select(*columns)._select_set_multiple(hash_key_symbols(columns)) 812 else 813 ds.select(auto_alias_expression(columns.first))._select_set_single 814 end 815 end
Limits the dataset to one record, and returns the first record in the dataset, or nil if the dataset has no records. Users should probably use first instead of this method. Example:
DB[:test].single_record # SELECT * FROM test LIMIT 1 # => {:column_name=>'value'}
# File lib/sequel/dataset/actions.rb 823 def single_record 824 _single_record_ds.single_record! 825 end
Returns the first record in the dataset, without limiting the dataset. Returns nil if the dataset has no records. Users should probably use first instead of this method. This should only be used if you know the dataset is already limited to a single record. This method may be desirable to use for performance reasons, as it does not clone the receiver. Example:
DB[:test].single_record! # SELECT * FROM test # => {:column_name=>'value'}
# File lib/sequel/dataset/actions.rb 835 def single_record! 836 with_sql_first(select_sql) 837 end
Returns the first value of the first record in the dataset. Returns nil if the dataset is empty. Users should generally use get instead of this method. Example:
DB[:test].single_value # SELECT * FROM test LIMIT 1 # => 'value'
# File lib/sequel/dataset/actions.rb 845 def single_value 846 single_value_ds.each do |r| 847 r.each{|_, v| return v} 848 end 849 nil 850 end
Returns the first value of the first record in the dataset, without limiting the dataset. Returns nil if the dataset is empty. Users should generally use get instead of this method. Should not be used on graphed datasets or datasets that have row_procs that don’t return hashes. This method may be desirable to use for performance reasons, as it does not clone the receiver.
DB[:test].single_value! # SELECT * FROM test # => 'value'
# File lib/sequel/dataset/actions.rb 860 def single_value! 861 with_sql_single_value(select_sql) 862 end
Returns the sum for the given column/expression. Uses a virtual row block if no column is given.
DB[:table].sum(:id) # SELECT sum(id) FROM table LIMIT 1 # => 55 DB[:table].sum{function(column)} # SELECT sum(function(column)) FROM table LIMIT 1 # => 10
# File lib/sequel/dataset/actions.rb 871 def sum(arg=(no_arg = true), &block) 872 arg = Sequel.virtual_row(&block) if no_arg 873 _aggregate(:sum, arg) 874 end
Alias of as_hash for backwards compatibility.
# File lib/sequel/dataset/actions.rb 928 def to_hash(*a) 929 as_hash(*a) 930 end
Returns a hash with one column used as the key and arrays of another column's values as the values. If the value_column is not given or nil, the arrays contain the entire row hashes.
DB[:table].to_hash_groups(:name, :id) # SELECT * FROM table # {'Jim'=>[1, 4, 16, ...], 'Bob'=>[2], ...} DB[:table].to_hash_groups(:name) # SELECT * FROM table # {'Jim'=>[{:id=>1, :name=>'Jim'}, {:id=>4, :name=>'Jim'}, ...], 'Bob'=>[{:id=>2, :name=>'Bob'}], ...}
You can also provide an array of column names for either the key_column, the value column, or both:
DB[:table].to_hash_groups([:first, :middle], [:last, :id]) # SELECT * FROM table # {['Jim', 'Bob']=>[['Smith', 1], ['Jackson', 4], ...], ...} DB[:table].to_hash_groups([:first, :middle]) # SELECT * FROM table # {['Jim', 'Bob']=>[{:id=>1, :first=>'Jim', :middle=>'Bob', :last=>'Smith'}, ...], ...}
Options:
| :all |
Use all instead of each to retrieve the objects |
| :hash |
The object into which the values will be placed. If this is not given, an empty hash is used. This can be used to use a hash with a default value or default proc. |
# File lib/sequel/dataset/actions.rb 956 def to_hash_groups(key_column, value_column = nil, opts = OPTS) 957 h = opts[:hash] || {} 958 meth = opts[:all] ? :all : :each 959 if value_column 960 return naked.to_hash_groups(key_column, value_column, opts) if row_proc 961 if value_column.is_a?(Array) 962 if key_column.is_a?(Array) 963 public_send(meth){|r| (h[r.values_at(*key_column)] ||= []) << r.values_at(*value_column)} 964 else 965 public_send(meth){|r| (h[r[key_column]] ||= []) << r.values_at(*value_column)} 966 end 967 else 968 if key_column.is_a?(Array) 969 public_send(meth){|r| (h[r.values_at(*key_column)] ||= []) << r[value_column]} 970 else 971 public_send(meth){|r| (h[r[key_column]] ||= []) << r[value_column]} 972 end 973 end 974 elsif key_column.is_a?(Array) 975 public_send(meth){|r| (h[key_column.map{|k| r[k]}] ||= []) << r} 976 else 977 public_send(meth){|r| (h[r[key_column]] ||= []) << r} 978 end 979 h 980 end
Truncates the dataset. Returns nil.
DB[:table].truncate # TRUNCATE table # => nil
# File lib/sequel/dataset/actions.rb 986 def truncate 987 execute_ddl(truncate_sql) 988 end
Updates values for the dataset. The returned value is the number of rows updated. values should be a hash where the keys are columns to set and values are the values to which to set the columns.
DB[:table].update(x: nil) # UPDATE table SET x = NULL # => 10 DB[:table].update(x: Sequel[:x]+1, y: 0) # UPDATE table SET x = (x + 1), y = 0 # => 10
Some databases support using multiple tables in an UPDATE query. This requires multiple FROM tables (JOINs can also be used). As multiple FROM tables use an implicit CROSS JOIN, you should make sure your WHERE condition uses the appropriate filters for the FROM tables:
DB.from(:a, :b).join(:c, :d=>Sequel[:b][:e]).where{{a[:f]=>b[:g], a[:id]=>10}}. update(:f=>Sequel[:c][:h]) # UPDATE a # SET f = c.h # FROM b # INNER JOIN c ON (c.d = b.e) # WHERE ((a.f = b.g) AND (a.id = 10))
# File lib/sequel/dataset/actions.rb 1012 def update(values=OPTS, &block) 1013 sql = update_sql(values) 1014 if uses_returning?(:update) 1015 returning_fetch_rows(sql, &block) 1016 else 1017 execute_dui(sql) 1018 end 1019 end
Return an array of all rows matching the given filter condition, also yielding each row to the given block. Basically the same as where(cond).all(&block), except it can be optimized to not create an intermediate dataset.
DB[:table].where_all(id: [1,2,3]) # SELECT * FROM table WHERE (id IN (1, 2, 3))
# File lib/sequel/dataset/actions.rb 1027 def where_all(cond, &block) 1028 if loader = _where_loader([cond], nil) 1029 loader.all(filter_expr(cond), &block) 1030 else 1031 where(cond).all(&block) 1032 end 1033 end
Iterate over all rows matching the given filter condition, yielding each row to the given block. Basically the same as where(cond).each(&block), except it can be optimized to not create an intermediate dataset.
DB[:table].where_each(id: [1,2,3]){|row| p row} # SELECT * FROM table WHERE (id IN (1, 2, 3))
# File lib/sequel/dataset/actions.rb 1041 def where_each(cond, &block) 1042 if loader = _where_loader([cond], nil) 1043 loader.each(filter_expr(cond), &block) 1044 else 1045 where(cond).each(&block) 1046 end 1047 end
Filter the dataset using the given filter condition, then return a single value. This assumes that the dataset has already been set up to limit the selection to a single column. Basically the same as where(cond).single_value, except it can be optimized to not create an intermediate dataset.
DB[:table].select(:name).where_single_value(id: 1) # SELECT name FROM table WHERE (id = 1) LIMIT 1
# File lib/sequel/dataset/actions.rb 1056 def where_single_value(cond) 1057 if loader = cached_where_placeholder_literalizer([cond], nil, :_where_single_value_loader) do |pl| 1058 single_value_ds.where(pl.arg) 1059 end 1060 1061 loader.get(filter_expr(cond)) 1062 else 1063 where(cond).single_value 1064 end 1065 end
Run the given SQL and return an array of all rows. If a block is given, each row is yielded to the block after all rows are loaded. See with_sql_each.
# File lib/sequel/dataset/actions.rb 1069 def with_sql_all(sql, &block) 1070 _all(block){|a| with_sql_each(sql){|r| a << r}} 1071 end
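A hedged usage sketch (the SQL, table, and returned rows are hypothetical):
DB[:table].with_sql_all("SELECT * FROM table WHERE id < 10")
# => [{:id=>1, ...}, {:id=>2, ...}, ...]
DB[:table].with_sql_all("SELECT * FROM table WHERE id < 10"){|row| p row}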
Execute the given SQL and return the number of rows deleted. This exists solely as an optimization, replacing with_sql(sql).delete. It’s significantly faster as it does not require cloning the current dataset.
# File lib/sequel/dataset/actions.rb 1076 def with_sql_delete(sql) 1077 execute_dui(sql) 1078 end
Run the given SQL and yield each returned row to the block.
# File lib/sequel/dataset/actions.rb 1082 def with_sql_each(sql) 1083 if rp = row_proc 1084 _with_sql_dataset.fetch_rows(sql){|r| yield rp.call(r)} 1085 else 1086 _with_sql_dataset.fetch_rows(sql){|r| yield r} 1087 end 1088 self 1089 end
Run the given SQL and return the first row, or nil if no rows were returned. See with_sql_each.
# File lib/sequel/dataset/actions.rb 1093 def with_sql_first(sql) 1094 with_sql_each(sql){|r| return r} 1095 nil 1096 end
Execute the given SQL and (on most databases) return the primary key of the inserted row.
# File lib/sequel/dataset/actions.rb 1109 def with_sql_insert(sql) 1110 execute_insert(sql) 1111 end
Run the given SQL and return the first value in the first row, or nil if no rows were returned. For this to make sense, the SQL given should select only a single value. See with_sql_each.
# File lib/sequel/dataset/actions.rb 1101 def with_sql_single_value(sql) 1102 if r = with_sql_first(sql) 1103 r.each{|_, v| return v} 1104 end 1105 end
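A hedged usage sketch (the SQL and the result are hypothetical):
DB[:table].with_sql_single_value("SELECT count(*) FROM table")
# => 42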
Protected Instance methods
Internals of import. If primary key values are requested, use separate insert commands for each row. Otherwise, call multi_insert_sql and execute each statement it gives separately. A transaction is only used if there are multiple statements to execute.
# File lib/sequel/dataset/actions.rb 1119 def _import(columns, values, opts) 1120 trans_opts = Hash[opts] 1121 trans_opts[:server] = @opts[:server] 1122 if opts[:return] == :primary_key 1123 _import_transaction(values, trans_opts){values.map{|v| insert(columns, v)}} 1124 else 1125 stmts = multi_insert_sql(columns, values) 1126 _import_transaction(stmts, trans_opts){stmts.each{|st| execute_dui(st)}} 1127 end 1128 end
Return an array of arrays of values given by the symbols in ret_cols.
# File lib/sequel/dataset/actions.rb 1131 def _select_map_multiple(ret_cols) 1132 map{|r| r.values_at(*ret_cols)} 1133 end
Returns an array of the first value in each row.
# File lib/sequel/dataset/actions.rb 1136 def _select_map_single 1137 k = nil 1138 map{|r| r[k||=r.keys.first]} 1139 end
Return a set of arrays of values given by the symbols in ret_cols.
# File lib/sequel/dataset/actions.rb 1142 def _select_set_multiple(ret_cols) 1143 to_set{|r| r.values_at(*ret_cols)} 1144 end
Returns a set of the first value in each row.
# File lib/sequel/dataset/actions.rb 1147 def _select_set_single 1148 k = nil 1149 to_set{|r| r[k||=r.keys.first]} 1150 end
A dataset for returning single values from the current dataset.
# File lib/sequel/dataset/actions.rb 1153 def single_value_ds 1154 clone(:limit=>1).ungraphed.naked 1155 end
3 - User Methods relating to SQL Creation
Public Instance methods
Returns an EXISTS clause for the dataset as an SQL::PlaceholderLiteralString.
DB.select(1).where(DB[:items].exists) # SELECT 1 WHERE (EXISTS (SELECT * FROM items))
# File lib/sequel/dataset/sql.rb 14 def exists 15 SQL::PlaceholderLiteralString.new(EXISTS, [self], true) 16 end
Returns an INSERT SQL query string. See insert.
DB[:items].insert_sql(a: 1) # => "INSERT INTO items (a) VALUES (1)"
# File lib/sequel/dataset/sql.rb 22 def insert_sql(*values) 23 return static_sql(@opts[:sql]) if @opts[:sql] 24 25 check_insert_allowed! 26 27 columns, values = _parse_insert_sql_args(values) 28 if values.is_a?(Array) && values.empty? && !insert_supports_empty_values? 29 columns, values = insert_empty_columns_values 30 elsif values.is_a?(Dataset) && hoist_cte?(values) && supports_cte?(:insert) 31 ds, values = hoist_cte(values) 32 return ds.clone(:columns=>columns, :values=>values).send(:_insert_sql) 33 end 34 clone(:columns=>columns, :values=>values).send(:_insert_sql) 35 end
Append a literal representation of a value to the given SQL string.
If an unsupported object is given, an Error is raised.
# File lib/sequel/dataset/sql.rb 40 def literal_append(sql, v) 41 case v 42 when Symbol 43 if skip_symbol_cache? 44 literal_symbol_append(sql, v) 45 else 46 unless l = db.literal_symbol(v) 47 l = String.new 48 literal_symbol_append(l, v) 49 db.literal_symbol_set(v, l) 50 end 51 sql << l 52 end 53 when String 54 case v 55 when LiteralString 56 sql << v 57 when SQL::Blob 58 literal_blob_append(sql, v) 59 else 60 literal_string_append(sql, v) 61 end 62 when Integer 63 sql << literal_integer(v) 64 when Hash 65 literal_hash_append(sql, v) 66 when SQL::Expression 67 literal_expression_append(sql, v) 68 when Float 69 sql << literal_float(v) 70 when BigDecimal 71 sql << literal_big_decimal(v) 72 when NilClass 73 sql << literal_nil 74 when TrueClass 75 sql << literal_true 76 when FalseClass 77 sql << literal_false 78 when Array 79 literal_array_append(sql, v) 80 when Time 81 v.is_a?(SQLTime) ? literal_sqltime_append(sql, v) : literal_time_append(sql, v) 82 when DateTime 83 literal_datetime_append(sql, v) 84 when Date 85 literal_date_append(sql, v) 86 when Dataset 87 literal_dataset_append(sql, v) 88 when Set 89 literal_set_append(sql, v) 90 else 91 literal_other_append(sql, v) 92 end 93 end
Literalize a date or time value, as a SQL string value with no typecasting. If raw is true, remove the surrounding single quotes. This is designed for usage by bound argument code that can work even if the auto_cast_date_and_time extension is used (either manually or implicitly in the related adapter).
# File lib/sequel/dataset/sql.rb 125 def literal_date_or_time(dt, raw=false) 126 value = case dt 127 when SQLTime 128 literal_sqltime(dt) 129 when Time 130 literal_time(dt) 131 when DateTime 132 literal_datetime(dt) 133 when Date 134 literal_date(dt) 135 else 136 raise TypeError, "unsupported type: #{dt.inspect}" 137 end 138 139 if raw 140 value.sub!(/\A'/, '') 141 value.sub!(/'\z/, '') 142 end 143 144 value 145 end
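A hedged sketch of the raw argument, assuming the default date literalization format:
ds = DB[:table]
ds.literal_date_or_time(Date.new(2020, 1, 1))        # => "'2020-01-01'"
ds.literal_date_or_time(Date.new(2020, 1, 1), true)  # => "2020-01-01"   (surrounding quotes removed)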
The SQL to use for the MERGE statement.
# File lib/sequel/dataset/sql.rb 96 def merge_sql 97 raise Error, "This database doesn't support MERGE" unless supports_merge? 98 if sql = opts[:sql] 99 return static_sql(sql) 100 end 101 if sql = cache_get(:_merge_sql) 102 return sql 103 end 104 source, join_condition = @opts[:merge_using] 105 raise Error, "No USING clause for MERGE" unless source 106 sql = @opts[:append_sql] || sql_string_origin 107 108 select_with_sql(sql) 109 sql << "MERGE INTO " 110 source_list_append(sql, @opts[:from]) 111 sql << " USING " 112 identifier_append(sql, source) 113 sql << " ON " 114 literal_append(sql, join_condition) 115 _merge_when_sql(sql) 116 cache_set(:_merge_sql, sql) if cache_sql? 117 sql 118 end
Returns an array of insert statements for inserting multiple records. This method is used by multi_insert to format insert statements and expects a keys array and an array of value arrays.
# File lib/sequel/dataset/sql.rb 150 def multi_insert_sql(columns, values) 151 case multi_insert_sql_strategy 152 when :values 153 sql = LiteralString.new('VALUES ') 154 expression_list_append(sql, values.map{|r| Array(r)}) 155 [insert_sql(columns, sql)] 156 when :union 157 c = false 158 sql = LiteralString.new 159 u = ' UNION ALL SELECT ' 160 f = empty_from_sql 161 values.each do |v| 162 if c 163 sql << u 164 else 165 sql << 'SELECT ' 166 c = true 167 end 168 expression_list_append(sql, v) 169 sql << f if f 170 end 171 [insert_sql(columns, sql)] 172 else 173 values.map{|r| insert_sql(columns, r)} 174 end 175 end
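A hedged sketch of the returned statements; the exact form depends on the database's multi-insert strategy:
DB[:items].multi_insert_sql([:x], [[1], [2]])
# => ["INSERT INTO items (x) VALUES (1)", "INSERT INTO items (x) VALUES (2)"]
# or, on databases that support multi-row VALUES clauses:
# => ["INSERT INTO items (x) VALUES (1), (2)"]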
Same as select_sql, not aliased directly to make subclassing simpler.
# File lib/sequel/dataset/sql.rb 178 def sql 179 select_sql 180 end
Returns a TRUNCATE SQL query string. See truncate
DB[:items].truncate_sql # => 'TRUNCATE items'
# File lib/sequel/dataset/sql.rb 185 def truncate_sql 186 if opts[:sql] 187 static_sql(opts[:sql]) 188 else 189 check_truncation_allowed! 190 check_not_limited!(:truncate) 191 raise(InvalidOperation, "Can't truncate filtered datasets") if opts[:where] || opts[:having] 192 t = String.new 193 source_list_append(t, opts[:from]) 194 _truncate_sql(t) 195 end 196 end
Formats an UPDATE statement using the given values. See update.
DB[:items].update_sql(price: 100, category: 'software') # => "UPDATE items SET price = 100, category = 'software'"
Raises an Error if the dataset is grouped or includes more than one table.
# File lib/sequel/dataset/sql.rb 205 def update_sql(values = OPTS) 206 return static_sql(opts[:sql]) if opts[:sql] 207 check_update_allowed! 208 check_not_limited!(:update) 209 210 case values 211 when LiteralString 212 # nothing 213 when String 214 raise Error, "plain string passed to Dataset#update is not supported, use Sequel.lit to use a literal string" 215 end 216 217 clone(:values=>values).send(:_update_sql) 218 end
4 - Methods that describe what the dataset supports
Public Instance methods
Whether this dataset will provide an accurate number of rows matched for delete and update statements, true by default. Accurate in this case means the number of rows matched by the dataset's filter.
# File lib/sequel/dataset/features.rb 19 def provides_accurate_rows_matched? 20 true 21 end
Whether this dataset quotes identifiers.
# File lib/sequel/dataset/features.rb 12 def quote_identifiers? 13 @opts.fetch(:quote_identifiers, true) 14 end
Whether you must use a column alias list for recursive CTEs, false by default.
# File lib/sequel/dataset/features.rb 24 def recursive_cte_requires_column_aliases? 25 false 26 end
Whether type specifiers are required for prepared statement/bound variable argument placeholders (i.e. :bv__integer), false by default.
# File lib/sequel/dataset/features.rb 41 def requires_placeholder_type_specifiers? 42 false 43 end
Whether the dataset requires SQL standard datetimes. False by default, as most allow strings with ISO 8601 format. Only for backwards compatibility, no longer used internally, do not use in new code.
# File lib/sequel/dataset/features.rb 33 def requires_sql_standard_datetimes? 34 # SEQUEL6: Remove 35 false 36 end
Whether the dataset supports common table expressions, false by default. If given, type can be :select, :insert, :update, or :delete, in which case it determines whether WITH is supported for the respective statement type.
# File lib/sequel/dataset/features.rb 48 def supports_cte?(type=:select) 49 false 50 end
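A hedged sketch of gating a WITH clause on this check (table and column names hypothetical):
ds = DB[:items]
recent = DB[:orders].where{created_at > Date.today - 7}
ds = ds.with(:recent_orders, recent) if ds.supports_cte?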
Whether the dataset supports common table expressions in subqueries, false by default. If false, applies the WITH clause to the main query, which can cause issues if multiple WITH clauses use the same name.
# File lib/sequel/dataset/features.rb 55 def supports_cte_in_subqueries? 56 false 57 end
Whether deleting from joined datasets is supported, false by default.
# File lib/sequel/dataset/features.rb 60 def supports_deleting_joins? 61 supports_modifying_joins? 62 end
Whether the database supports derived column lists (e.g. “table_expr AS table_alias(column_alias1, column_alias2, …)”), true by default.
# File lib/sequel/dataset/features.rb 67 def supports_derived_column_lists? 68 true 69 end
Whether the dataset supports or can emulate the DISTINCT ON clause, false by default.
# File lib/sequel/dataset/features.rb 72 def supports_distinct_on? 73 false 74 end
Whether the dataset supports CUBE with GROUP BY, false by default.
# File lib/sequel/dataset/features.rb 77 def supports_group_cube? 78 false 79 end
Whether the dataset supports ROLLUP with GROUP BY, false by default.
# File lib/sequel/dataset/features.rb 82 def supports_group_rollup? 83 false 84 end
Whether the dataset supports GROUPING SETS with GROUP BY, false by default.
# File lib/sequel/dataset/features.rb 87 def supports_grouping_sets? 88 false 89 end
Whether this dataset supports the insert_select method for returning all column values directly from an insert query, false by default.
# File lib/sequel/dataset/features.rb 93 def supports_insert_select? 94 supports_returning?(:insert) 95 end
Whether the dataset supports the INTERSECT and EXCEPT compound operations, true by default.
# File lib/sequel/dataset/features.rb 98 def supports_intersect_except? 99 true 100 end
Whether the dataset supports the INTERSECT ALL and EXCEPT ALL compound operations, true by default.
# File lib/sequel/dataset/features.rb 103 def supports_intersect_except_all? 104 true 105 end
Whether the dataset supports the IS TRUE syntax, true by default.
# File lib/sequel/dataset/features.rb 108 def supports_is_true? 109 true 110 end
Whether the dataset supports the JOIN table USING (column1, …) syntax, true by default. If false, support is emulated using JOIN table ON (table.column1 = other_table.column1).
# File lib/sequel/dataset/features.rb 114 def supports_join_using? 115 true 116 end
Whether the dataset supports LATERAL for subqueries in the FROM or JOIN clauses, false by default.
# File lib/sequel/dataset/features.rb 119 def supports_lateral_subqueries? 120 false 121 end
Whether the MERGE statement is supported, false by default.
# File lib/sequel/dataset/features.rb 134 def supports_merge? 135 false 136 end
Whether modifying joined datasets is supported, false by default.
# File lib/sequel/dataset/features.rb 139 def supports_modifying_joins? 140 false 141 end
Whether the IN/NOT IN operators support multiple columns when an array of values is given, true by default.
# File lib/sequel/dataset/features.rb 145 def supports_multiple_column_in? 146 true 147 end
Whether the dataset supports raising an error instead of waiting for locked rows when returning data, false by default.
# File lib/sequel/dataset/features.rb 129 def supports_nowait? 130 false 131 end
Whether the dataset supports or can fully emulate the DISTINCT ON clause, including respecting the ORDER BY clause, false by default.
# File lib/sequel/dataset/features.rb 156 def supports_ordered_distinct_on? 157 supports_distinct_on? 158 end
Whether placeholder literalizers are supported, true by default.
# File lib/sequel/dataset/features.rb 161 def supports_placeholder_literalizer? 162 true 163 end
Whether the dataset supports pattern matching by regular expressions, false by default.
# File lib/sequel/dataset/features.rb 166 def supports_regexp? 167 false 168 end
Whether the dataset supports REPLACE syntax, false by default.
# File lib/sequel/dataset/features.rb 171 def supports_replace? 172 false 173 end
Whether the RETURNING clause is supported for the given type of query, false by default. type can be :insert, :update, or :delete.
# File lib/sequel/dataset/features.rb 177 def supports_returning?(type) 178 false 179 end
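A hedged sketch of branching on RETURNING support for inserts (table, column, and returned values hypothetical):
ds = DB[:items]
if ds.supports_returning?(:insert)
  ds.returning(:id).insert(name: 'a')   # => [{:id=>1}]   (rows from the RETURNING clause)
else
  ds.insert(name: 'a')                  # => 1            (autoincremented primary key)
end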
Whether the database supports SELECT *, column FROM table, true by default.
# File lib/sequel/dataset/features.rb 187 def supports_select_all_and_column? 188 true 189 end
Whether the dataset supports skipping locked rows when returning data, false by default.
# File lib/sequel/dataset/features.rb 182 def supports_skip_locked? 183 false 184 end
Whether the dataset supports timezones in literal timestamps, false by default.
# File lib/sequel/dataset/features.rb 194 def supports_timestamp_timezones? 195 # SEQUEL6: Remove 196 false 197 end
Whether the dataset supports fractional seconds in literal timestamps, true by default.
# File lib/sequel/dataset/features.rb 201 def supports_timestamp_usecs? 202 true 203 end
Whether updating joined datasets is supported, false by default.
# File lib/sequel/dataset/features.rb 206 def supports_updating_joins? 207 supports_modifying_joins? 208 end
Whether the dataset supports WHERE TRUE (or WHERE 1 for databases that use 1 for true), true by default.
# File lib/sequel/dataset/features.rb 235 def supports_where_true? 236 true 237 end
Whether the dataset supports the WINDOW clause to define windows used by multiple window functions, false by default.
# File lib/sequel/dataset/features.rb 212 def supports_window_clause? 213 false 214 end
Whether the dataset supports the given window function option. True by default. This should only be called if supports_window_functions? is true. Possible options are :rows, :range, :groups, :offset, :exclude.
# File lib/sequel/dataset/features.rb 224 def supports_window_function_frame_option?(option) 225 case option 226 when :rows, :range, :offset 227 true 228 else 229 false 230 end 231 end
Whether the dataset supports window functions, false by default.
# File lib/sequel/dataset/features.rb 217 def supports_window_functions? 218 false 219 end
5 - Methods related to dataset graphing
Public Instance methods
Adds the given graph aliases to the list of graph aliases to use, unlike set_graph_aliases, which replaces the list (the equivalent of select_append when graphing). See set_graph_aliases.
DB[:table].add_graph_aliases(some_alias: [:table, :column]) # SELECT ..., table.column AS some_alias
# File lib/sequel/dataset/graph.rb 18 def add_graph_aliases(graph_aliases) 19 graph = opts[:graph] 20 unless (graph && (ga = graph[:column_aliases])) 21 raise Error, "cannot call add_graph_aliases on a dataset that has not been called with graph or set_graph_aliases" 22 end 23 columns, graph_aliases = graph_alias_columns(graph_aliases) 24 select_append(*columns).clone(:graph => graph.merge(:column_aliases=>ga.merge(graph_aliases).freeze).freeze) 25 end
Similar to Dataset#join_table, but uses unambiguous aliases for selected columns and keeps metadata about the aliases for use in other methods.
Arguments:
| dataset |
Can be a symbol (specifying a table), another dataset, or an SQL::Identifier, SQL::QualifiedIdentifier, or SQL::AliasedExpression.
| join_conditions |
Any condition(s) allowed by join_table.
| block |
A block that is passed to join_table.
Options:
| :from_self_alias |
The alias to use when the receiver is not a graphed dataset but it contains multiple FROM tables or a JOIN. In this case, the receiver is wrapped in a from_self before graphing, and this option determines the alias to use.
| :implicit_qualifier |
The qualifier of implicit conditions, see join_table.
| :join_only |
Only join the tables, do not change the selected columns. |
| :join_type |
The type of join to use (passed to join_table). Defaults to :left_outer.
| :qualify |
The type of qualification to do, see join_table.
| :select |
An array of columns to select. When not used, selects all columns in the given dataset. When set to false, selects no columns and is like simply joining the tables, though graph keeps some metadata about the join that makes it important to use graph instead of join_table.
| :table_alias |
The alias to use for the table. If not specified, doesn’t alias the table. You will get an error if the alias (or table) name is used more than once. |
# File lib/sequel/dataset/graph.rb 53 def graph(dataset, join_conditions = nil, options = OPTS, &block) 54 # Allow the use of a dataset or symbol as the first argument 55 # Find the table name/dataset based on the argument 56 table_alias = options[:table_alias] 57 table = dataset 58 create_dataset = true 59 60 case dataset 61 when Symbol 62 # let alias be the same as the table name (sans any optional schema) 63 # unless alias explicitly given in the symbol using ___ notation and symbol splitting is enabled 64 table_alias ||= split_symbol(table).compact.last 65 when Dataset 66 if dataset.simple_select_all? 67 table = dataset.opts[:from].first 68 table_alias ||= table 69 else 70 table_alias ||= dataset_alias((@opts[:num_dataset_sources] || 0)+1) 71 end 72 create_dataset = false 73 when SQL::Identifier 74 table_alias ||= table.value 75 when SQL::QualifiedIdentifier 76 table_alias ||= split_qualifiers(table).last 77 when SQL::AliasedExpression 78 return graph(table.expression, join_conditions, {:table_alias=>table.alias}.merge!(options), &block) 79 else 80 raise Error, "The dataset argument should be a symbol or dataset" 81 end 82 table_alias = table_alias.to_sym 83 84 if create_dataset 85 dataset = db.from(table) 86 end 87 88 # Raise Sequel::Error with explanation that the table alias has been used 89 raise_alias_error = lambda do 90 raise(Error, "this #{options[:table_alias] ? 'alias' : 'table'} has already been been used, please specify " \ 91 "#{options[:table_alias] ? 'a different alias' : 'an alias via the :table_alias option'}") 92 end 93 94 # Only allow table aliases that haven't been used 95 raise_alias_error.call if @opts[:graph] && @opts[:graph][:table_aliases] && @opts[:graph][:table_aliases].include?(table_alias) 96 97 table_alias_qualifier = qualifier_from_alias_symbol(table_alias, table) 98 implicit_qualifier = options[:implicit_qualifier] 99 joined_dataset = joined_dataset? 100 ds = self 101 graph = opts[:graph] 102 103 if !graph && (select = @opts[:select]) && !select.empty? 104 select_columns = nil 105 106 unless !joined_dataset && select.length == 1 && (select[0].is_a?(SQL::ColumnAll)) 107 force_from_self = false 108 select_columns = select.map do |sel| 109 unless col = _hash_key_symbol(sel) 110 force_from_self = true 111 break 112 end 113 114 [sel, col] 115 end 116 117 select_columns = nil if force_from_self 118 end 119 end 120 121 # Use a from_self if this is already a joined table (or from_self specifically disabled for graphs) 122 if (@opts[:graph_from_self] != false && !graph && (joined_dataset || force_from_self)) 123 from_selfed = true 124 implicit_qualifier = options[:from_self_alias] || first_source 125 ds = ds.from_self(:alias=>implicit_qualifier) 126 end 127 128 # Join the table early in order to avoid cloning the dataset twice 129 ds = ds.join_table(options[:join_type] || :left_outer, table, join_conditions, :table_alias=>table_alias_qualifier, :implicit_qualifier=>implicit_qualifier, :qualify=>options[:qualify], &block) 130 131 return ds if options[:join_only] 132 133 opts = ds.opts 134 135 # Whether to include the table in the result set 136 add_table = options[:select] == false ? 
false : true 137 138 if graph 139 graph = graph.dup 140 select = opts[:select].dup 141 [:column_aliases, :table_aliases, :column_alias_num].each{|k| graph[k] = graph[k].dup} 142 else 143 # Setup the initial graph data structure if it doesn't exist 144 qualifier = ds.first_source_alias 145 master = alias_symbol(qualifier) 146 raise_alias_error.call if master == table_alias 147 148 # Master hash storing all .graph related information 149 graph = {} 150 151 # Associates column aliases back to tables and columns 152 column_aliases = graph[:column_aliases] = {} 153 154 # Associates table alias (the master is never aliased) 155 table_aliases = graph[:table_aliases] = {master=>self} 156 157 # Keep track of the alias numbers used 158 ca_num = graph[:column_alias_num] = Hash.new(0) 159 160 select = if select_columns 161 select_columns.map do |sel, column| 162 column_aliases[column] = [master, column] 163 if from_selfed 164 # Initial dataset was wrapped in subselect, selected all 165 # columns in the subselect, qualified by the subselect alias. 166 Sequel.qualify(qualifier, Sequel.identifier(column)) 167 else 168 # Initial dataset not wrapped in subslect, just make 169 # sure columns are qualified in some way. 170 qualified_expression(sel, qualifier) 171 end 172 end 173 else 174 columns.map do |column| 175 column_aliases[column] = [master, column] 176 SQL::QualifiedIdentifier.new(qualifier, column) 177 end 178 end 179 end 180 181 # Add the table alias to the list of aliases 182 # Even if it isn't been used in the result set, 183 # we add a key for it with a nil value so we can check if it 184 # is used more than once 185 table_aliases = graph[:table_aliases] 186 table_aliases[table_alias] = add_table ? dataset : nil 187 188 # Add the columns to the selection unless we are ignoring them 189 if add_table 190 column_aliases = graph[:column_aliases] 191 ca_num = graph[:column_alias_num] 192 # Which columns to add to the result set 193 cols = options[:select] || dataset.columns 194 # If the column hasn't been used yet, don't alias it. 195 # If it has been used, try table_column. 196 # If that has been used, try table_column_N 197 # using the next value of N that we know hasn't been 198 # used 199 cols.each do |column| 200 col_alias, identifier = if column_aliases[column] 201 column_alias = :"#{table_alias}_#{column}" 202 if column_aliases[column_alias] 203 column_alias_num = ca_num[column_alias] 204 column_alias = :"#{column_alias}_#{column_alias_num}" 205 ca_num[column_alias] += 1 206 end 207 [column_alias, SQL::AliasedExpression.new(SQL::QualifiedIdentifier.new(table_alias_qualifier, column), column_alias)] 208 else 209 ident = SQL::QualifiedIdentifier.new(table_alias_qualifier, column) 210 [column, ident] 211 end 212 column_aliases[col_alias] = [table_alias, column].freeze 213 select.push(identifier) 214 end 215 end 216 [:column_aliases, :table_aliases, :column_alias_num].each{|k| graph[k].freeze} 217 ds = ds.clone(:graph=>graph.freeze) 218 ds.select(*select) 219 end
This allows you to manually specify the graph aliases to use when using graph. You can use it to only select certain columns, and have those columns mapped to specific aliases in the result set. This is the equivalent of select for a graphed dataset, and must be used instead of select whenever graphing is used.
graph_aliases should be a hash with keys being symbols of column aliases, and values being either symbols or arrays with one to three elements. If the value is a symbol, it is assumed to be the same as a one element array containing that symbol. The first element of the array should be the table alias symbol. The second should be the actual column name symbol. If the array only has a single element the column name symbol will be assumed to be the same as the corresponding hash key. If the array has a third element, it is used as the value returned, instead of table_alias.column_name.
DB[:artists].graph(:albums, artist_id: :id).
  set_graph_aliases(name: :artists,
                    album_name: [:albums, :name],
                    forty_two: [:albums, :fourtwo, 42]).first
# SELECT artists.name, albums.name AS album_name, 42 AS forty_two ...
# File lib/sequel/dataset/graph.rb 244 def set_graph_aliases(graph_aliases) 245 columns, graph_aliases = graph_alias_columns(graph_aliases) 246 if graph = opts[:graph] 247 select(*columns).clone(:graph => graph.merge(:column_aliases=>graph_aliases.freeze).freeze) 248 else 249 raise Error, "cannot call #set_graph_aliases on an ungraphed dataset" 250 end 251 end
Remove the splitting of results into subhashes, and all metadata related to the current graph (if any).
# File lib/sequel/dataset/graph.rb 255 def ungraphed 256 return self unless opts[:graph] 257 clone(:graph=>nil) 258 end
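For example (a sketch assuming artists and albums tables, with albums.artist_id referencing artists.id):

ds = DB[:artists].graph(:albums, artist_id: :id)
ds.first            # row split into subhashes: {:artists=>{...}, :albums=>{...}}
ds.ungraphed.first  # plain row hash, no splitting, graph metadata removed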
6 - Miscellaneous methods
Attributes
| cache | [R] | Access the cache for the current dataset. Should be used with caution, as access to the cache is not thread safe without a mutex if other threads can reference the dataset. Symbol keys prefixed with an underscore are reserved for internal use. |
| db | [R] | The database related to this dataset. |
| opts | [R] | The hash of options for this dataset; keys are symbols. |
Public Class methods
Constructs a new Dataset instance with an associated database and options. Datasets are usually constructed by invoking the Database#[] method:
DB[:posts]
Sequel::Dataset is an abstract class that is not useful by itself. Each database adapter provides a subclass of Sequel::Dataset, and has the Database#dataset method return an instance of that subclass.
# File lib/sequel/dataset/misc.rb 25 def initialize(db) 26 @db = db 27 @opts = OPTS 28 @cache = {} 29 freeze 30 end
Public Instance methods
Define equality such that datasets with the same class, DB, and opts are considered equal.
# File lib/sequel/dataset/misc.rb 34 def ==(o) 35 o.is_a?(self.class) && db == o.db && opts == o.opts 36 end
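For example, two datasets are equal only when class, db, and opts all match:

DB[:posts] == DB[:posts]              # => true
DB[:posts] == DB[:posts].where(id: 1) # => false (different opts)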
An object representing the current date or time, should be an instance of Sequel.datetime_class.
# File lib/sequel/dataset/misc.rb 40 def current_datetime 41 Sequel.datetime_class.now 42 end
Return self, as datasets are always frozen.
# File lib/sequel/dataset/misc.rb 50 def dup 51 self 52 end
Yield a dataset for each server in the connection pool that is tied to that server. Intended for use in sharded environments where all servers need to be modified with the same data:
DB[:configs].where(key: 'setting').each_server{|ds| ds.update(value: 'new_value')}
# File lib/sequel/dataset/misc.rb 59 def each_server 60 db.servers.each{|s| yield server(s)} 61 end
Alias for ==
# File lib/sequel/dataset/misc.rb 45 def eql?(o) 46 self == o 47 end
Returns the string with the LIKE metacharacters (% and _) escaped. Useful for when the LIKE term is a user-provided string where metacharacters should not be recognized. Example:
ds.escape_like("foo\\%_") # 'foo\\\%\_'
# File lib/sequel/dataset/misc.rb 68 def escape_like(string) 69 string.gsub(/[\\%_]/){|m| "\\#{m}"} 70 end
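A common use is combining it with Sequel.like so that a user-supplied term is matched literally (a sketch; search_term stands in for user input):

term = ds.escape_like(search_term)
ds.where(Sequel.like(:name, "%#{term}%")).all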
Alias of first_source_alias
# File lib/sequel/dataset/misc.rb 91 def first_source 92 first_source_alias 93 end
The first source (primary table) for this dataset. If the dataset doesn’t have a table, raises an Error. If the table is aliased, returns the aliased name.
DB[:table].first_source_alias # => :table
DB[Sequel[:table].as(:t)].first_source_alias # => :t
# File lib/sequel/dataset/misc.rb 103 def first_source_alias 104 source = @opts[:from] 105 if source.nil? || source.empty? 106 raise Error, 'No source specified for query' 107 end 108 case s = source.first 109 when SQL::AliasedExpression 110 s.alias 111 when Symbol 112 _, _, aliaz = split_symbol(s) 113 aliaz ? aliaz.to_sym : s 114 else 115 s 116 end 117 end
The first source (primary table) for this dataset. If the dataset doesn’t have a table, raises an Error. If the table is aliased, returns the original table, not the alias.
DB[:table].first_source_table # => :table
DB[Sequel[:table].as(:t)].first_source_table # => :table
# File lib/sequel/dataset/misc.rb 128 def first_source_table 129 source = @opts[:from] 130 if source.nil? || source.empty? 131 raise Error, 'No source specified for query' 132 end 133 case s = source.first 134 when SQL::AliasedExpression 135 s.expression 136 when Symbol 137 sch, table, aliaz = split_symbol(s) 138 aliaz ? (sch ? SQL::QualifiedIdentifier.new(sch, table) : table.to_sym) : s 139 else 140 s 141 end 142 end
Freeze the opts when freezing the dataset.
# File lib/sequel/dataset/misc.rb 74 def freeze 75 @opts.freeze 76 super 77 end
Define a hash value such that datasets with the same class, DB, and opts, will have the same hash value.
# File lib/sequel/dataset/misc.rb 146 def hash 147 [self.class, db, opts].hash 148 end
Returns a string representation of the dataset including the class name and the corresponding SQL select statement.
# File lib/sequel/dataset/misc.rb 152 def inspect 153 "#<#{visible_class_name}: #{sql.inspect}>" 154 end
Whether this dataset is a joined dataset (multiple FROM tables or any JOINs).
# File lib/sequel/dataset/misc.rb 157 def joined_dataset? 158 !!((opts[:from].is_a?(Array) && opts[:from].size > 1) || opts[:join]) 159 end
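For example:

DB[:a].joined_dataset?                     # => false
DB[:a, :b].joined_dataset?                 # => true (multiple FROM tables)
DB[:a].join(:b, id: :b_id).joined_dataset? # => true (JOIN used)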
The class to use for placeholder literalizers for the current dataset.
# File lib/sequel/dataset/misc.rb 162 def placeholder_literalizer_class 163 ::Sequel::Dataset::PlaceholderLiteralizer 164 end
A placeholder literalizer loader for the current dataset.
# File lib/sequel/dataset/misc.rb 167 def placeholder_literalizer_loader(&block) 168 placeholder_literalizer_class.loader(self, &block) 169 end
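A sketch of typical usage, assuming an items table: the block is called with the placeholder recorder and the dataset, and the resulting loader literalizes most of the SQL once, filling in the arguments on each call:

loader = DB[:items].placeholder_literalizer_loader do |pl, ds|
  ds.where(id: pl.arg)
end
loader.first(1) # fetches the row with id 1
loader.first(2) # reuses the mostly pre-literalized SQL with a different argument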
The alias to use for the row_number column, used when emulating OFFSET support and for eager limit strategies
# File lib/sequel/dataset/misc.rb 173 def row_number_column 174 :x_sequel_row_number_x 175 end
The row_proc for this dataset. It should be any object that responds to call with a single hash argument and returns the object you want each to return.
# File lib/sequel/dataset/misc.rb 179 def row_proc 180 @opts[:row_proc] 181 end
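The row_proc is normally set via with_row_proc. A sketch (Post is a hypothetical class):

ds = DB[:posts].with_row_proc(lambda{|h| Post.new(h)})
ds.row_proc # => the lambda above
ds.all      # => Post instances instead of plain hashes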
Splits a possible implicit alias in c, handling both SQL::AliasedExpressions and Symbols. Returns an array of two elements, with the first being the main expression, and the second being the alias.
# File lib/sequel/dataset/misc.rb 186 def split_alias(c) 187 case c 188 when Symbol 189 c_table, column, aliaz = split_symbol(c) 190 [c_table ? SQL::QualifiedIdentifier.new(c_table, column.to_sym) : column.to_sym, aliaz] 191 when SQL::AliasedExpression 192 [c.expression, c.alias] 193 when SQL::JoinClause 194 [c.table, c.table_alias] 195 else 196 [c, nil] 197 end 198 end
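For example:

ds.split_alias(Sequel.as(:posts, :p)) # => [:posts, :p]
ds.split_alias(:posts)                # => [:posts, nil]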
This returns an SQL::Identifier or SQL::AliasedExpression containing an SQL identifier that represents the unqualified column for the given value. The given value should be a Symbol, SQL::Identifier, SQL::QualifiedIdentifier, or SQL::AliasedExpression containing one of those. In other cases, this returns nil.
# File lib/sequel/dataset/misc.rb 205 def unqualified_column_for(v) 206 unless v.is_a?(String) 207 _unqualified_column_for(v) 208 end 209 end
Creates a unique table alias that hasn’t already been used in the dataset. table_alias can be any type of object accepted by alias_symbol. The symbol returned will be the implicit alias in the argument, possibly appended with “_N” if the implicit alias has already been used, where N is an integer starting at 0 and increasing until an unused one is found.
You can provide an optional second array argument containing symbols that should not be considered valid table aliases. The current aliases for the FROM and JOIN tables are automatically included in this array.
DB[:table].unused_table_alias(:t) # => :t
DB[:table].unused_table_alias(:table) # => :table_0
DB[:table, :table_0].unused_table_alias(:table) # => :table_1
DB[:table, :table_0].unused_table_alias(:table, [:table_1, :table_2]) # => :table_3
# File lib/sequel/dataset/misc.rb 233 def unused_table_alias(table_alias, used_aliases = []) 234 table_alias = alias_symbol(table_alias) 235 used_aliases += opts[:from].map{|t| alias_symbol(t)} if opts[:from] 236 used_aliases += opts[:join].map{|j| j.table_alias ? alias_alias_symbol(j.table_alias) : alias_symbol(j.table)} if opts[:join] 237 if used_aliases.include?(table_alias) 238 i = 0 239 while true 240 ta = :"#{table_alias}_#{i}" 241 return ta unless used_aliases.include?(ta) 242 i += 1 243 end 244 else 245 table_alias 246 end 247 end
Return a modified dataset with quote_identifiers set.
# File lib/sequel/dataset/misc.rb 250 def with_quote_identifiers(v) 251 clone(:quote_identifiers=>v, :skip_symbol_cache=>true) 252 end
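For example (the exact quoting character depends on the adapter; the SQL standard double quote is shown):

DB[:items].with_quote_identifiers(true).sql  # => 'SELECT * FROM "items"'
DB[:items].with_quote_identifiers(false).sql # => 'SELECT * FROM items'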
Protected Instance methods
The cached columns for the current dataset.
# File lib/sequel/dataset/misc.rb 281 def _columns 282 cache_get(:_columns) 283 end
Retrieve a value from the dataset’s cache in a thread safe manner.
# File lib/sequel/dataset/misc.rb 263 def cache_get(k) 264 Sequel.synchronize{@cache[k]} 265 end
Set a value in the dataset’s cache in a thread safe manner.
# File lib/sequel/dataset/misc.rb 268 def cache_set(k, v) 269 Sequel.synchronize{@cache[k] = v} 270 end
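A sketch of how an extension might use these protected helpers to memoize a computed value (my_metadata and compute_metadata are hypothetical; note that underscore-prefixed keys are reserved for internal use):

module MyDatasetMethods
  def my_metadata
    cache_get(:my_metadata) || cache_set(:my_metadata, compute_metadata)
  end
end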
Clear the columns hash for the current dataset. This is not a thread safe operation, so it should only be used if the dataset could not be used by another thread (such as one that was just created via clone).
# File lib/sequel/dataset/misc.rb 276 def clear_columns_cache 277 @cache.delete(:_columns) 278 end
8 - Methods related to prepared statements or bound variables
Constants
| DEFAULT_PREPARED_STATEMENT_MODULE_METHODS | = | %w'execute execute_dui execute_insert'.freeze.each(&:freeze) |
| PREPARED_ARG_PLACEHOLDER | = | LiteralString.new('?').freeze |
| PREPARED_STATEMENT_MODULE_CODE | = | { :bind => "opts = Hash[opts]; opts[:arguments] = bind_arguments".freeze, :prepare => "sql = prepared_statement_name".freeze, :prepare_bind => "sql = prepared_statement_name; opts = Hash[opts]; opts[:arguments] = bind_arguments".freeze }.freeze |
Public Instance methods
Set the bind variables to use for the call. If bind variables have already been set for this dataset, they are updated with the contents of bind_vars.
DB[:table].where(id: :$id).bind(id: 1).call(:first)
# SELECT * FROM table WHERE id = ? LIMIT 1 -- (1)
# => {:id=>1}
# File lib/sequel/dataset/prepared_statements.rb 346 def bind(bind_vars=OPTS) 347 bind_vars = if bv = @opts[:bind_vars] 348 bv.merge(bind_vars).freeze 349 else 350 if bind_vars.frozen? 351 bind_vars 352 else 353 Hash[bind_vars] 354 end 355 end 356 357 clone(:bind_vars=>bind_vars) 358 end
For the given type, run the sql with the bind variables specified in the hash. values is a hash passed to insert or update (if one of those types is used), which may contain placeholders.
The following types are supported:
- :select, :all, :each, :first, :single_value, :insert, :insert_select, :insert_pk, :update, :delete
- Array where the first element is :map, :as_hash, :to_hash, or :to_hash_groups (remaining elements are passed to the related method)
DB[:table].where(id: :$id).call(:first, id: 1)
# SELECT * FROM table WHERE id = ? LIMIT 1 -- (1)
# => {:id=>1}

DB[:table].where(id: :$id).call(:update, {c: 1, id: 2}, col: :$c)
# UPDATE table SET col = ? WHERE id = ? -- (1, 2)
# => 1
# File lib/sequel/dataset/prepared_statements.rb 377 def call(type, bind_variables=OPTS, *values, &block) 378 to_prepared_statement(type, values, :extend=>bound_variable_modules).call(bind_variables, &block) 379 end
Prepare an SQL statement for later execution. Takes a type similar to call, and the name symbol of the prepared statement.
This returns a clone of the dataset extended with PreparedStatementMethods, which you can call with the hash of bind variables to use. The prepared statement is also stored in the associated Database, where it can be called by name. The following usage is identical:
ps = DB[:table].where(name: :$name).prepare(:first, :select_by_name)
ps.call(name: 'Blah')
# SELECT * FROM table WHERE name = ? -- ('Blah')
# => {:id=>1, :name=>'Blah'}

DB.call(:select_by_name, name: 'Blah') # Same thing
values given are passed to insert or update if they are used:
ps = DB[:table].where(id: :$i).prepare(:update, :update_name, name: :$n)
ps.call(i: 1, n: 'Blah')
# UPDATE table SET name = ? WHERE id = ? -- ('Blah', 1)
# => 1
# File lib/sequel/dataset/prepared_statements.rb 405 def prepare(type, name, *values) 406 ps = to_prepared_statement(type, values, :name=>name, :extend=>prepared_statement_modules, :no_delayed_evaluations=>true) 407 408 ps = if ps.send(:emulate_prepared_statements?) 409 ps = ps.with_extend(EmulatePreparedStatementMethods) 410 ps.send(:emulated_prepared_statement, type, name, values) 411 else 412 sql = ps.prepared_sql 413 ps.prepared_args.freeze 414 ps.clone(:prepared_sql=>sql, :sql=>sql) 415 end 416 417 db.set_prepared_statement(name, ps) 418 ps 419 end
Set the type of SQL to use for prepared statements based on this dataset. Prepared statements default to using the same SQL type as the type passed to prepare or call, but there are cases where it is helpful to use a different SQL type.
Available types are: :select, :first, :single_value, :update, :delete, :insert, :insert_select, :insert_pk
Other types are treated as :select.
# File lib/sequel/dataset/prepared_statements.rb 430 def prepare_sql_type(type) 431 clone(:prepared_sql_type => type) 432 end
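A hedged sketch: force :select style SQL (no LIMIT 1 added by :first) while still fetching a single row when called:

ps = DB[:table].where(id: :$id).prepare_sql_type(:select).prepare(:first, :select_by_id)
ps.call(id: 1) # SELECT * FROM table WHERE id = ? -- (1)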
Protected Instance methods
Return a cloned copy of the current dataset extended with PreparedStatementMethods, setting the prepared statement type and the values to use for insert or update statements.
# File lib/sequel/dataset/prepared_statements.rb 438 def to_prepared_statement(type, values=nil, opts=OPTS) 439 mods = opts[:extend] || [] 440 mods += [PreparedStatementMethods] 441 442 bind. 443 clone(:prepared_statement_name=>opts[:name], :prepared_type=>type, :prepared_modify_values=>values, :orig_dataset=>self, :no_cache_sql=>true, :prepared_args=>@opts[:prepared_args]||[], :no_delayed_evaluations=>opts[:no_delayed_evaluations]). 444 with_extend(*mods) 445 end
9 - Internal Methods relating to SQL Creation
Constants
| BITWISE_METHOD_MAP | = | {:& =>:BITAND, :| => :BITOR, :^ => :BITXOR}.freeze |
| COUNT_FROM_SELF_OPTS | = | [:distinct, :group, :sql, :limit, :offset, :compounds].freeze |
| COUNT_OF_ALL_AS_COUNT | = | SQL::Function.new(:count, WILDCARD).as(:count) |
| DEFAULT | = | LiteralString.new('DEFAULT').freeze |
| EXISTS | = | ['EXISTS '.freeze].freeze |
| IS_LITERALS | = | {nil=>'NULL'.freeze, true=>'TRUE'.freeze, false=>'FALSE'.freeze}.freeze |
| IS_OPERATORS | = | ::Sequel::SQL::ComplexExpression::IS_OPERATORS |
| LIKE_OPERATORS | = | ::Sequel::SQL::ComplexExpression::LIKE_OPERATORS |
| N_ARITY_OPERATORS | = | ::Sequel::SQL::ComplexExpression::N_ARITY_OPERATORS |
| QUALIFY_KEYS | = | [:select, :where, :having, :order, :group].freeze |
| REGEXP_OPERATORS | = | ::Sequel::SQL::ComplexExpression::REGEXP_OPERATORS |
| TWO_ARITY_OPERATORS | = | ::Sequel::SQL::ComplexExpression::TWO_ARITY_OPERATORS |
| WILDCARD | = | LiteralString.new('*').freeze |
Public Class methods
Given a type (e.g. select) and an array of clauses, return an array of methods to call to build the SQL string.
# File lib/sequel/dataset/sql.rb 227 def self.clause_methods(type, clauses) 228 clauses.map{|clause| :"#{type}_#{clause}_sql"}.freeze 229 end
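For example, given the definition above:

Sequel::Dataset.clause_methods(:select, [:columns, :from, :where])
# => [:select_columns_sql, :select_from_sql, :select_where_sql]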
Define a dataset literalization method for the given type in the given module, using the given clauses.
Arguments:
| mod | Module in which to define the method |
| type | Type of SQL statement the method builds (e.g. :select) |
| clauses | Array of clauses that make up the SQL statement |
# File lib/sequel/dataset/sql.rb 241 def self.def_sql_method(mod, type, clauses) 242 priv = type == :update || type == :insert 243 cacheable = type == :select || type == :delete 244 245 lines = [] 246 lines << 'private' if priv 247 lines << "def #{'_' if priv}#{type}_sql" 248 lines << 'if sql = opts[:sql]; return static_sql(sql) end' unless priv 249 lines << "if sql = cache_get(:_#{type}_sql); return sql end" if cacheable 250 lines << 'check_delete_allowed!' << 'check_not_limited!(:delete)' if type == :delete 251 lines << 'sql = @opts[:append_sql] || sql_string_origin' 252 253 if clauses.all?{|c| c.is_a?(Array)} 254 clauses.each do |i, cs| 255 lines << i 256 lines.concat(clause_methods(type, cs).map{|x| "#{x}(sql)"}) 257 end 258 lines << 'end' 259 else 260 lines.concat(clause_methods(type, clauses).map{|x| "#{x}(sql)"}) 261 end 262 263 lines << "cache_set(:_#{type}_sql, sql) if cache_sql?" if cacheable 264 lines << 'sql' 265 lines << 'end' 266 267 mod.class_eval lines.join("\n"), __FILE__, __LINE__ 268 end
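A sketch of how an adapter might use it (MyAdapterDatasetMethods and the clause list are illustrative only):

Sequel::Dataset.def_sql_method(MyAdapterDatasetMethods, :select,
  %w'select distinct columns from join where group having order limit')
# defines MyAdapterDatasetMethods#select_sql, which calls select_select_sql,
# select_distinct_sql, ..., select_limit_sql in order to build the SQL string.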
Public Instance methods
Append literalization of aliased expression to SQL string.
# File lib/sequel/dataset/sql.rb 302 def aliased_expression_sql_append(sql, ae) 303 literal_append(sql, ae.expression) 304 as_sql_append(sql, ae.alias, ae.columns) 305 end
Append literalization of array to SQL string.
# File lib/sequel/dataset/sql.rb 308 def array_sql_append(sql, a) 309 if a.empty? 310 sql << '(NULL)' 311 else 312 sql << '(' 313 expression_list_append(sql, a) 314 sql << ')' 315 end 316 end
Append literalization of boolean constant to SQL string.
# File lib/sequel/dataset/sql.rb 319 def boolean_constant_sql_append(sql, constant) 320 if (constant == true || constant == false) && !supports_where_true? 321 sql << (constant == true ? '(1 = 1)' : '(1 = 0)') 322 else 323 literal_append(sql, constant) 324 end 325 end
Append literalization of case expression to SQL string.
# File lib/sequel/dataset/sql.rb 328 def case_expression_sql_append(sql, ce) 329 sql << '(CASE' 330 if ce.expression? 331 sql << ' ' 332 literal_append(sql, ce.expression) 333 end 334 w = " WHEN " 335 t = " THEN " 336 ce.conditions.each do |c,r| 337 sql << w 338 literal_append(sql, c) 339 sql << t 340 literal_append(sql, r) 341 end 342 sql << " ELSE " 343 literal_append(sql, ce.default) 344 sql << " END)" 345 end
Append literalization of cast expression to SQL string.
# File lib/sequel/dataset/sql.rb 348 def cast_sql_append(sql, expr, type) 349 sql << 'CAST(' 350 literal_append(sql, expr) 351 sql << ' AS ' << db.cast_type_literal(type).to_s 352 sql << ')' 353 end
Append literalization of column all selection to SQL string.
# File lib/sequel/dataset/sql.rb 356 def column_all_sql_append(sql, ca) 357 qualified_identifier_sql_append(sql, ca.table, WILDCARD) 358 end
Append literalization of complex expression to SQL string.
# File lib/sequel/dataset/sql.rb 361 def complex_expression_sql_append(sql, op, args) 362 case op 363 when *IS_OPERATORS 364 r = args[1] 365 if r.nil? || supports_is_true? 366 raise(InvalidOperation, 'Invalid argument used for IS operator') unless val = IS_LITERALS[r] 367 sql << '(' 368 literal_append(sql, args[0]) 369 sql << ' ' << op.to_s << ' ' 370 sql << val << ')' 371 elsif op == :IS 372 complex_expression_sql_append(sql, :"=", args) 373 else 374 complex_expression_sql_append(sql, :OR, [SQL::BooleanExpression.new(:"!=", *args), SQL::BooleanExpression.new(:IS, args[0], nil)]) 375 end 376 when :IN, :"NOT IN" 377 cols = args[0] 378 vals = args[1] 379 col_array = true if cols.is_a?(Array) 380 if vals.is_a?(Array) || vals.is_a?(Set) 381 val_array = true 382 empty_val_array = vals.empty? 383 end 384 if empty_val_array 385 literal_append(sql, empty_array_value(op, cols)) 386 elsif col_array 387 if !supports_multiple_column_in? 388 if val_array 389 expr = SQL::BooleanExpression.new(:OR, *vals.to_a.map{|vs| SQL::BooleanExpression.from_value_pairs(cols.to_a.zip(vs).map{|c, v| [c, v]})}) 390 literal_append(sql, op == :IN ? expr : ~expr) 391 else 392 old_vals = vals 393 vals = vals.naked if vals.is_a?(Sequel::Dataset) 394 vals = vals.to_a 395 val_cols = old_vals.columns 396 complex_expression_sql_append(sql, op, [cols, vals.map!{|x| x.values_at(*val_cols)}]) 397 end 398 else 399 # If the columns and values are both arrays, use array_sql instead of 400 # literal so that if values is an array of two element arrays, it 401 # will be treated as a value list instead of a condition specifier. 402 sql << '(' 403 literal_append(sql, cols) 404 sql << ' ' << op.to_s << ' ' 405 if val_array 406 array_sql_append(sql, vals) 407 else 408 literal_append(sql, vals) 409 end 410 sql << ')' 411 end 412 else 413 sql << '(' 414 literal_append(sql, cols) 415 sql << ' ' << op.to_s << ' ' 416 literal_append(sql, vals) 417 sql << ')' 418 end 419 when :LIKE, :'NOT LIKE' 420 sql << '(' 421 literal_append(sql, args[0]) 422 sql << ' ' << op.to_s << ' ' 423 literal_append(sql, args[1]) 424 if requires_like_escape? 425 sql << " ESCAPE " 426 literal_append(sql, "\\") 427 end 428 sql << ')' 429 when :ILIKE, :'NOT ILIKE' 430 complex_expression_sql_append(sql, (op == :ILIKE ? :LIKE : :"NOT LIKE"), args.map{|v| Sequel.function(:UPPER, v)}) 431 when :** 432 function_sql_append(sql, Sequel.function(:power, *args)) 433 when *TWO_ARITY_OPERATORS 434 if REGEXP_OPERATORS.include?(op) && !supports_regexp? 435 raise InvalidOperation, "Pattern matching via regular expressions is not supported on #{db.database_type}" 436 end 437 sql << '(' 438 literal_append(sql, args[0]) 439 sql << ' ' << op.to_s << ' ' 440 literal_append(sql, args[1]) 441 sql << ')' 442 when *N_ARITY_OPERATORS 443 sql << '(' 444 c = false 445 op_str = " #{op} " 446 args.each do |a| 447 sql << op_str if c 448 literal_append(sql, a) 449 c ||= true 450 end 451 sql << ')' 452 when :NOT 453 sql << 'NOT ' 454 literal_append(sql, args[0]) 455 when :NOOP 456 literal_append(sql, args[0]) 457 when :'B~' 458 sql << '~' 459 literal_append(sql, args[0]) 460 when :extract 461 sql << 'extract(' << args[0].to_s << ' FROM ' 462 literal_append(sql, args[1]) 463 sql << ')' 464 else 465 raise(InvalidOperation, "invalid operator #{op}") 466 end 467 end
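These append methods are normally reached indirectly through literal/literal_append. Output shown for the generic default dataset (adapters may differ):

ds = DB[:items]
ds.literal(Sequel[:a] + :b)          # => "(a + b)"
ds.literal(Sequel.|({a: 1}, {b: 2})) # => "((a = 1) OR (b = 2))"
ds.literal(Sequel.~(a: nil))         # => "(a IS NOT NULL)"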
Append literalization of constant to SQL string.
# File lib/sequel/dataset/sql.rb 470 def constant_sql_append(sql, constant) 471 sql << constant.to_s 472 end
Append literalization of delayed evaluation to SQL string, causing the delayed evaluation proc to be evaluated.
# File lib/sequel/dataset/sql.rb 476 def delayed_evaluation_sql_append(sql, delay) 477 # Delayed evaluations are used specifically so the SQL 478 # can differ in subsequent calls, so we definitely don't 479 # want to cache the sql in this case. 480 disable_sql_caching! 481 482 if recorder = @opts[:placeholder_literalizer] 483 recorder.use(sql, lambda{delay.call(self)}, nil) 484 else 485 literal_append(sql, delay.call(self)) 486 end 487 end
Append literalization of function call to SQL string.
# File lib/sequel/dataset/sql.rb 490 def function_sql_append(sql, f) 491 name = f.name 492 opts = f.opts 493 494 if opts[:emulate] 495 if emulate_function?(name) 496 emulate_function_sql_append(sql, f) 497 return 498 end 499 500 name = native_function_name(name) 501 end 502 503 sql << 'LATERAL ' if opts[:lateral] 504 505 case name 506 when SQL::Identifier 507 if supports_quoted_function_names? && opts[:quoted] 508 literal_append(sql, name) 509 else 510 sql << name.value.to_s 511 end 512 when SQL::QualifiedIdentifier 513 if supports_quoted_function_names? && opts[:quoted] != false 514 literal_append(sql, name) 515 else 516 sql << split_qualifiers(name).join('.') 517 end 518 else 519 if supports_quoted_function_names? && opts[:quoted] 520 quote_identifier_append(sql, name) 521 else 522 sql << name.to_s 523 end 524 end 525 526 sql << '(' 527 if filter = opts[:filter] 528 filter = filter_expr(filter, &opts[:filter_block]) 529 end 530 if opts[:*] 531 if filter && !supports_filtered_aggregates? 532 literal_append(sql, Sequel.case({filter=>1}, nil)) 533 filter = nil 534 else 535 sql << '*' 536 end 537 else 538 sql << "DISTINCT " if opts[:distinct] 539 if filter && !supports_filtered_aggregates? 540 expression_list_append(sql, f.args.map{|arg| Sequel.case({filter=>arg}, nil)}) 541 filter = nil 542 else 543 expression_list_append(sql, f.args) 544 end 545 if order = opts[:order] 546 sql << " ORDER BY " 547 expression_list_append(sql, order) 548 end 549 end 550 sql << ')' 551 552 if group = opts[:within_group] 553 sql << " WITHIN GROUP (ORDER BY " 554 expression_list_append(sql, group) 555 sql << ')' 556 end 557 558 if filter 559 sql << " FILTER (WHERE " 560 literal_append(sql, filter) 561 sql << ')' 562 end 563 564 if window = opts[:over] 565 sql << ' OVER ' 566 window_sql_append(sql, window.opts) 567 end 568 569 if opts[:with_ordinality] 570 sql << " WITH ORDINALITY" 571 end 572 end
Append literalization of JOIN clause without ON or USING to SQL string.
# File lib/sequel/dataset/sql.rb 575 def join_clause_sql_append(sql, jc) 576 table = jc.table 577 table_alias = jc.table_alias 578 table_alias = nil if table == table_alias && !jc.column_aliases 579 sql << ' ' << join_type_sql(jc.join_type) << ' ' 580 identifier_append(sql, table) 581 as_sql_append(sql, table_alias, jc.column_aliases) if table_alias 582 end
Append literalization of JOIN ON clause to SQL string.
# File lib/sequel/dataset/sql.rb 585 def join_on_clause_sql_append(sql, jc) 586 join_clause_sql_append(sql, jc) 587 sql << ' ON ' 588 literal_append(sql, filter_expr(jc.on)) 589 end
Append literalization of JOIN USING clause to SQL string.
# File lib/sequel/dataset/sql.rb 592 def join_using_clause_sql_append(sql, jc) 593 join_clause_sql_append(sql, jc) 594 join_using_clause_using_sql_append(sql, jc.using) 595 end
Append literalization of negative boolean constant to SQL string.
# File lib/sequel/dataset/sql.rb 598 def negative_boolean_constant_sql_append(sql, constant) 599 sql << 'NOT ' 600 boolean_constant_sql_append(sql, constant) 601 end
Append literalization of ordered expression to SQL string.
# File lib/sequel/dataset/sql.rb 604 def ordered_expression_sql_append(sql, oe) 605 if emulate = requires_emulating_nulls_first? 606 case oe.nulls 607 when :first 608 null_order = 0 609 when :last 610 null_order = 2 611 end 612 613 if null_order 614 literal_append(sql, Sequel.case({{oe.expression=>nil}=>null_order}, 1)) 615 sql << ", " 616 end 617 end 618 619 literal_append(sql, oe.expression) 620 sql << (oe.descending ? ' DESC' : ' ASC') 621 622 unless emulate 623 case oe.nulls 624 when :first 625 sql << " NULLS FIRST" 626 when :last 627 sql << " NULLS LAST" 628 end 629 end 630 end
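For example, on a database that supports NULLS FIRST/LAST directly (others emulate it via the CASE expression shown above):

DB[:items].order(Sequel.desc(:name, nulls: :last)).sql
# => "SELECT * FROM items ORDER BY name DESC NULLS LAST"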
Append literalization of placeholder literal string to SQL string.
# File lib/sequel/dataset/sql.rb 633 def placeholder_literal_string_sql_append(sql, pls) 634 args = pls.args 635 str = pls.str 636 sql << '(' if pls.parens 637 if args.is_a?(Hash) 638 if args.empty? 639 sql << str 640 else 641 re = /:(#{args.keys.map{|k| Regexp.escape(k.to_s)}.join('|')})\b/ 642 while true 643 previous, q, str = str.partition(re) 644 sql << previous 645 literal_append(sql, args[($1||q[1..-1].to_s).to_sym]) unless q.empty? 646 break if str.empty? 647 end 648 end 649 elsif str.is_a?(Array) 650 len = args.length 651 str.each_with_index do |s, i| 652 sql << s 653 literal_append(sql, args[i]) unless i == len 654 end 655 unless str.length == args.length || str.length == args.length + 1 656 raise Error, "Mismatched number of placeholders (#{str.length}) and placeholder arguments (#{args.length}) when using placeholder array" 657 end 658 else 659 i = -1 660 match_len = args.length - 1 661 while true 662 previous, q, str = str.partition('?') 663 sql << previous 664 literal_append(sql, args.at(i+=1)) unless q.empty? 665 if str.empty? 666 unless i == match_len 667 raise Error, "Mismatched number of placeholders (#{i+1}) and placeholder arguments (#{args.length}) when using placeholder string" 668 end 669 break 670 end 671 end 672 end 673 sql << ')' if pls.parens 674 end
Append literalization of qualified identifier to SQL string. If 3 arguments are given, the 2nd should be the table/qualifier and the third should be column/qualified. If 2 arguments are given, the 2nd should be an SQL::QualifiedIdentifier.
# File lib/sequel/dataset/sql.rb 679 def qualified_identifier_sql_append(sql, table, column=(c = table.column; table = table.table; c)) 680 identifier_append(sql, table) 681 sql << '.' 682 identifier_append(sql, column) 683 end
Append literalization of unqualified identifier to SQL string. Adds quoting to identifiers (columns and tables). If identifiers are not being quoted, appends the name as a string. If identifiers are being quoted, quotes the name using quoted_identifier_append.
# File lib/sequel/dataset/sql.rb 689 def quote_identifier_append(sql, name) 690 name = name.value if name.is_a?(SQL::Identifier) 691 if name.is_a?(LiteralString) 692 sql << name 693 else 694 name = input_identifier(name) 695 if quote_identifiers? 696 quoted_identifier_append(sql, name) 697 else 698 sql << name 699 end 700 end 701 end
Append literalization of identifier or unqualified identifier to SQL string.
# File lib/sequel/dataset/sql.rb 704 def quote_schema_table_append(sql, table) 705 qualifiers = split_qualifiers(table) 706 table = qualifiers.pop 707 708 qualifiers.each do |q| 709 quote_identifier_append(sql, q) 710 sql << '.' 711 end 712 713 quote_identifier_append(sql, table) 714 end
Append literalization of quoted identifier to SQL string. This method quotes the given name with the SQL standard double quote, and should be overridden by subclasses to provide quoting that does not match the SQL standard, such as backtick quoting (used by MySQL and SQLite).
# File lib/sequel/dataset/sql.rb 720 def quoted_identifier_append(sql, name) 721 sql << '"' << name.to_s.gsub('"', '""') << '"' 722 end
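For example, with identifier quoting enabled:

DB[:items].with_quote_identifiers(true).literal(Sequel.identifier(:order)) # => '"order"'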
Split the schema information from the table, returning two strings, one for the schema and one for the table. The returned schema may be nil, but the table will always have a string value.
Note that this function does not handle tables with more than one level of qualification (e.g. database.schema.table on Microsoft SQL Server).
# File lib/sequel/dataset/sql.rb 731 def schema_and_table(table_name, sch=nil) 732 sch = sch.to_s if sch 733 case table_name 734 when Symbol 735 s, t, _ = split_symbol(table_name) 736 [s||sch, t] 737 when SQL::QualifiedIdentifier 738 [table_name.table.to_s, table_name.column.to_s] 739 when SQL::Identifier 740 [sch, table_name.value.to_s] 741 when String 742 [sch, table_name] 743 else 744 raise Error, 'table_name should be a Symbol, SQL::QualifiedIdentifier, SQL::Identifier, or String' 745 end 746 end
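For example:

ds.schema_and_table(:t)             # => [nil, "t"]
ds.schema_and_table(Sequel[:s][:t]) # => ["s", "t"]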
Splits table_name into an array of strings.
ds.split_qualifiers(:s) # ['s']
ds.split_qualifiers(Sequel[:t][:s]) # ['t', 's']
ds.split_qualifiers(Sequel[:d][:t][:s]) # ['d', 't', 's']
ds.split_qualifiers(Sequel.qualify(Sequel[:h][:d], Sequel[:t][:s])) # ['h', 'd', 't', 's']
# File lib/sequel/dataset/sql.rb 754 def split_qualifiers(table_name, *args) 755 case table_name 756 when SQL::QualifiedIdentifier 757 split_qualifiers(table_name.table, nil) + split_qualifiers(table_name.column, nil) 758 else 759 sch, table = schema_and_table(table_name, *args) 760 sch ? [sch, table] : [table] 761 end 762 end
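Append literalization of subscript expression (SQL array access) to SQL string. For example (a sketch; array subscripts are mainly useful on PostgreSQL):

DB[:items].literal(Sequel.subscript(:arr, 1))    # => "arr[1]"
DB[:items].literal(Sequel.subscript(:arr, 1..2)) # => "arr[1:2]"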
# File lib/sequel/dataset/sql.rb 765 def subscript_sql_append(sql, s) 766 case s.expression 767 when Symbol, SQL::Subscript, SQL::Identifier, SQL::QualifiedIdentifier 768 # nothing 769 else 770 wrap_expression = true 771 sql << '(' 772 end 773 literal_append(sql, s.expression) 774 if wrap_expression 775 sql << ')[' 776 else 777 sql << '[' 778 end 779 sub = s.sub 780 if sub.length == 1 && (range = sub.first).is_a?(Range) 781 literal_append(sql, range.begin) 782 sql << ':' 783 e = range.end 784 e -= 1 if range.exclude_end? && e.is_a?(Integer) 785 literal_append(sql, e) 786 else 787 expression_list_append(sql, s.sub) 788 end 789 sql << ']' 790 end
Append literalization of windows (for window functions) to SQL string.
# File lib/sequel/dataset/sql.rb 793 def window_sql_append(sql, opts) 794 raise(Error, 'This dataset does not support window functions') unless supports_window_functions? 795 space = false 796 space_s = ' ' 797 798 sql << '(' 799 800 if window = opts[:window] 801 literal_append(sql, window) 802 space = true 803 end 804 805 if part = opts[:partition] 806 sql << space_s if space 807 sql << "PARTITION BY " 808 expression_list_append(sql, Array(part)) 809 space = true 810 end 811 812 if order = opts[:order] 813 sql << space_s if space 814 sql << "ORDER BY " 815 expression_list_append(sql, Array(order)) 816 space = true 817 end 818 819 if frame = opts[:frame] 820 sql << space_s if space 821 822 if frame.is_a?(String) 823 sql << frame 824 else 825 case frame 826 when :all 827 frame_type = :rows 828 frame_start = :preceding 829 frame_end = :following 830 when :rows, :range, :groups 831 frame_type = frame 832 frame_start = :preceding 833 frame_end = :current 834 when Hash 835 frame_type = frame[:type] 836 unless frame_type == :rows || frame_type == :range || frame_type == :groups 837 raise Error, "invalid window :frame :type option: #{frame_type.inspect}" 838 end 839 unless frame_start = frame[:start] 840 raise Error, "invalid window :frame :start option: #{frame_start.inspect}" 841 end 842 frame_end = frame[:end] 843 frame_exclude = frame[:exclude] 844 else 845 raise Error, "invalid window :frame option: #{frame.inspect}" 846 end 847 848 sql << frame_type.to_s.upcase << " " 849 sql << 'BETWEEN ' if frame_end 850 window_frame_boundary_sql_append(sql, frame_start, :preceding) 851 if frame_end 852 sql << " AND " 853 window_frame_boundary_sql_append(sql, frame_end, :following) 854 end 855 856 if frame_exclude 857 sql << " EXCLUDE " 858 859 case frame_exclude 860 when :current 861 sql << "CURRENT ROW" 862 when :group 863 sql << "GROUP" 864 when :ties 865 sql << "TIES" 866 when :no_others 867 sql << "NO OTHERS" 868 else 869 raise Error, "invalid window :frame :exclude option: #{frame_exclude.inspect}" 870 end 871 end 872 end 873 end 874 875 sql << ')' 876 end
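For example, on a database that supports window functions (a sketch assuming an items table):

DB[:items].select{sum(:amount).over(partition: :account_id, order: :id)}.sql
# => "SELECT sum(amount) OVER (PARTITION BY account_id ORDER BY id) FROM items"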
Protected Instance methods
Return a from_self dataset if an order or limit is specified, so it works as expected with UNION, EXCEPT, and INTERSECT clauses.
# File lib/sequel/dataset/sql.rb 882 def compound_from_self 883 (@opts[:sql] || @opts[:limit] || @opts[:order] || @opts[:offset]) ? from_self : self 884 end