== Phases ==
{{Unreferenced section|date=September 2024}}

=== Extract ===
ETL processing involves extracting the data from the source system(s). In many cases, this represents the most important aspect of ETL, since extracting data correctly sets the stage for the success of subsequent processes. Most data-warehousing projects combine data from different source systems. Each separate system may also use a different data organization and/or [[File format|format]]. Common data-source formats include [[relational database]]s, [[flat-file database]]s, [[XML]], and [[JSON]], but may also include non-relational database structures such as [[IBM Information Management System]] or other data structures such as [[Virtual Storage Access Method|Virtual Storage Access Method (VSAM)]] or [[ISAM|Indexed Sequential Access Method (ISAM)]], or even formats fetched from outside sources by means such as a [[web crawler]] or [[data scraping]]. Streaming the extracted data from the source and loading it on-the-fly into the destination database is another way of performing ETL when no intermediate data storage is required.

An intrinsic part of the extraction involves data validation to confirm whether the data pulled from the sources has the correct/expected values in a given domain (such as a pattern/default or list of values). If the data fails the validation rules, it is rejected entirely or in part. The rejected data is ideally reported back to the source system for further analysis to identify and rectify incorrect records or perform [[data wrangling]].

=== Transform ===
In the [[data transformation]] stage, a series of rules or functions are applied to the extracted data in order to prepare it for loading into the end target. An important function of transformation is [[data cleansing]], which aims to pass only "proper" data to the target. A challenge when different systems interact is interfacing and communicating between them; character sets that are available in one system may not be available in others. In other cases, one or more of the following transformation types may be required to meet the business and technical needs of the server or data warehouse (several of these are illustrated in the sketch after the list):

* Selecting only certain columns to load (or selecting [[null (SQL)|null]] columns not to load). For example, if the source data has three columns (also called "attributes"), roll_no, age, and salary, then the selection may take only roll_no and salary. Or, the selection mechanism may ignore all records where salary is not present (salary = null).
* Translating coded values (''e.g.'', if the source system codes male as "1" and female as "2", but the warehouse codes male as "M" and female as "F")
* Encoding free-form values (''e.g.'', mapping "Male" to "M")
* Deriving a new calculated value (''e.g.'', sale_amount = qty * unit_price)
* Sorting or ordering the data based on a list of columns to improve search performance
* [[Join (relational algebra)#Joins and join-like operators|Join]]ing data from multiple sources (''e.g.'', lookup, merge) and [[Record linkage|deduplicating]] the data
* Aggregating (for example, rollup: summarizing multiple rows of data, such as total sales for each store and for each region)
* Generating [[surrogate key|surrogate-key]] values
* [[Transpose|Transposing]] or [[Pivot table|pivoting]] (turning multiple columns into multiple rows or vice versa)
* Splitting a column into multiple columns (''e.g.'', converting a [[comma separated values|comma-separated list]], specified as a string in one column, into individual values in different columns)
* Disaggregating repeating columns
* Looking up and validating the relevant data from tables or referential files
* Applying any form of data validation; failed validation may result in a full rejection of the data, partial rejection, or no rejection at all, and thus none, some, or all of the data is handed over to the next step, depending on the rule design and exception handling; many of the above transformations may result in exceptions, ''e.g.'', when a code translation parses an unknown code in the extracted data
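A minimal sketch in Python of a few of these transformation types, reusing the hypothetical columns (roll_no, salary, a coded gender value) from the examples above; the function and rules are illustrative, not a standard ETL implementation:

<syntaxhighlight lang="python">
# Illustrative sketch only: column names and rules are hypothetical,
# taken from the examples in the list above.

GENDER_CODES = {"1": "M", "2": "F"}  # translating coded values

def transform(row: dict) -> dict:
    """Apply a few of the listed transformation types to one record."""
    # Reject records where salary is not present (selection by predicate).
    if row.get("salary") is None:
        raise ValueError("validation failed: salary is null")

    # Translate a coded value; an unknown code raises an exception,
    # mirroring the exception-handling note in the last bullet.
    try:
        gender = GENDER_CODES[row["gender"]]
    except KeyError:
        raise ValueError(f"unknown gender code: {row['gender']!r}")

    # Select only certain columns and derive a new calculated value.
    return {
        "roll_no": row["roll_no"],
        "salary": row["salary"],
        "gender": gender,
        "sale_amount": row["qty"] * row["unit_price"],
    }

rows = [{"roll_no": 1, "age": 21, "salary": 50000, "gender": "2",
         "qty": 3, "unit_price": 9.99}]
print([transform(r) for r in rows])
</syntaxhighlight>

Production ETL tools typically express such rules declaratively rather than in hand-written code, but the flow (validate, translate, select, derive) is the same.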
=== Load ===
The load phase loads the data into the end target, which can be any data store, including a simple delimited flat file or a [[data warehouse]]. Depending on the requirements of the organization, this process varies widely. Some data warehouses may overwrite existing information with cumulative information; updating extracted data is frequently done on a daily, weekly, or monthly basis. Other data warehouses (or even other parts of the same data warehouse) may add new data in a historical form at regular intervals, for example, hourly. To understand this, consider a data warehouse that is required to maintain sales records of the last year. This data warehouse overwrites any data older than a year with newer data. However, data for any one-year window is entered in a historical manner. The timing and scope of replacing or appending are strategic design choices that depend on the time available and the [[business]] needs. More complex systems can maintain a history and [[audit trail]] of all changes to the data loaded in the data warehouse.

As the load phase interacts with a database, the constraints defined in the database schema, as well as in triggers activated upon data load, apply (for example, uniqueness, [[referential integrity]], mandatory fields), which also contributes to the overall data-quality performance of the ETL process; a sketch follows the examples below.

* For example, a financial institution might have information on a customer in several departments, and each department might have that customer's information listed in a different way. The membership department might list the customer by name, whereas the accounting department might list the customer by number. ETL can bundle all of these data elements and consolidate them into a uniform presentation, such as for storing in a database or data warehouse.
* Another way that companies use ETL is to move information to another application permanently. For instance, the new application might use another database vendor and, most likely, a very different database schema. ETL can be used to transform the data into a format suitable for the new application to use.
* An example would be an [[expense and cost recovery system]] such as used by [[Accounting|accountants]], [[consultant]]s, and [[law firm]]s. The data usually ends up in the [[Law practice management software|time and billing system]], although some businesses may also use the raw data for employee productivity reports to Human Resources (personnel dept.) or equipment usage reports to Facilities Management.
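A minimal sketch of schema constraints applying during load, using Python's built-in SQLite driver; the table, columns, and data are hypothetical. Rows that violate a uniqueness or mandatory-field constraint are rejected by the database itself:

<syntaxhighlight lang="python">
import sqlite3

# Hypothetical target schema: the database enforces uniqueness and
# mandatory fields at load time, as described above.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE customers (
        customer_id INTEGER PRIMARY KEY,   -- uniqueness constraint
        name        TEXT NOT NULL          -- mandatory field
    )
""")

rows = [
    (1, "Ada Lovelace"),
    (2, None),             # violates NOT NULL: rejected during load
    (1, "Duplicate Ada"),  # violates PRIMARY KEY: rejected during load
]

for row in rows:
    try:
        with conn:  # each row loads in its own transaction
            conn.execute("INSERT INTO customers VALUES (?, ?)", row)
    except sqlite3.IntegrityError as exc:
        print(f"rejected {row}: {exc}")

print(conn.execute("SELECT * FROM customers").fetchall())
</syntaxhighlight>

Whether rejected rows are logged, routed to an error table, or fail the whole batch is a design choice of the load process, not of the database.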
=== Additional phases ===
A real-life ETL cycle may consist of additional execution steps, for example (a pipeline sketch follows the list):

# Cycle initiation
# Build [[reference data]]
# Extract (from sources)
# [[data validation|Validate]]
# Transform ([[data cleaning|clean]], apply [[business rule]]s, check for [[data integrity]], create [[Aggregate (data warehouse)|aggregates]] or disaggregates)
# Stage (load into [[staging (data)|staging]] tables, if used)
# [[Audit report]]s (for example, on compliance with business rules; also, in case of failure, helps to diagnose/repair)
# Publish (to target tables)
# [[Archiving|Archive]]
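A minimal sketch of such a cycle as a Python pipeline; every function here is a hypothetical stub standing in for the corresponding phase above:

<syntaxhighlight lang="python">
# Illustrative stubs for the numbered steps above; all names are hypothetical.

def extract(sources):
    return [row for src in sources for row in src]

def validate(rows):
    return [r for r in rows if r.get("salary") is not None]

def transform(rows):
    return [{**r, "salary_k": r["salary"] / 1000} for r in rows]

def stage(rows):
    print(f"staged {len(rows)} rows")  # stand-in for staging tables
    return rows

def audit(rows):
    print(f"audit: {len(rows)} rows passed business rules")
    return rows

def publish(rows):
    print(f"published {len(rows)} rows to target tables")

sources = [[{"salary": 50000}, {"salary": None}], [{"salary": 72000}]]
publish(audit(stage(transform(validate(extract(sources))))))
</syntaxhighlight>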