== Design challenges ==
ETL processes can involve considerable complexity, and significant operational problems can occur with improperly designed ETL systems.

=== Data variations ===
The range of data values or data quality in an operational system may exceed the expectations of designers at the time validation and transformation rules are specified. [[Data profiling]] of a source during data analysis can identify the data conditions that must be managed by transform rules specifications, leading to an amendment of validation rules explicitly and implicitly implemented in the ETL process.

Data warehouses are typically assembled from a variety of data sources with different formats and purposes. As such, ETL is a key process to bring all the data together in a standard, homogeneous environment.

Design analysis<ref>{{Cite journal|last=Theodorou|first=Vasileios|date=2017|title=Frequent patterns in ETL workflows: An empirical approach|journal=Data & Knowledge Engineering|volume=112|pages=1–16|doi=10.1016/j.datak.2017.08.004|hdl=2117/110172|hdl-access=free}}</ref> should establish the [[scalability]] of an ETL system across the lifetime of its usage – including understanding the volumes of data that must be processed within [[service level agreement]]s. The time available to extract from source systems may change, which may mean the same amount of data may have to be processed in less time. Some ETL systems have to scale to process terabytes of data to update data warehouses with tens of terabytes of data. Increasing volumes of data may require designs that can scale from daily [[batch processing|batch]] to multiple-day micro batch to integration with [[message queue]]s or real-time change-data-capture for continuous transformation and update.

=== Uniqueness of keys ===
[[Unique key]]s play an important part in all relational databases, as they tie everything together. A unique key is a column that identifies a given entity, whereas a [[foreign key]] is a column in another table that refers to a primary key. Keys can comprise several columns, in which case they are composite keys. In many cases, the primary key is an auto-generated integer that has no meaning for the [[Business entity (computer science)|business entity]] being represented, but exists solely for the purpose of the relational database – commonly referred to as a [[surrogate key]].

As there is usually more than one data source being loaded into the warehouse, the keys are an important concern to be addressed. For example: customers might be represented in several data sources, with their [[Social Security number]] as the primary key in one source, their phone number in another, and a surrogate in the third. Yet a data warehouse may require the consolidation of all the customer information into one [[Dimension (data warehouse)|dimension]]. A recommended way to deal with the concern involves adding a warehouse surrogate key, which is used as a foreign key from the fact table.<ref>Kimball, The Data Warehouse Lifecycle Toolkit, p. 332</ref>
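As a minimal illustrative sketch of this approach (not drawn from the cited sources; all table, source, and field names are hypothetical, and plain Python dictionaries stand in for warehouse tables), one customer known under three different source keys is consolidated behind a single warehouse surrogate key, which is the only key the fact rows reference:

<syntaxhighlight lang="python">
# Hypothetical customer dimension keyed by a warehouse-assigned surrogate key.
customer_dimension = {
    1001: {"name": "A. Smith", "ssn": "123-45-6789", "phone": "+1-555-0100"},
}

# Lookup from each source's own key to the warehouse surrogate key.
source_key_map = {
    ("crm", "123-45-6789"): 1001,      # source keyed by Social Security number
    ("billing", "+1-555-0100"): 1001,  # source keyed by phone number
    ("web", "98765"): 1001,            # source keyed by its own surrogate
}

# Fact rows reference the customer only through the warehouse surrogate key.
sales_fact = [
    {"customer_key": 1001, "order_id": "A-17", "amount": 99.95},
]

def customer_for(source: str, source_key: str) -> dict:
    """Resolve a record from any source to the consolidated dimension row."""
    return customer_dimension[source_key_map[(source, source_key)]]

print(customer_for("billing", "+1-555-0100")["name"])  # -> A. Smith
</syntaxhighlight>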
Usually, updates occur to a dimension's source data, which obviously must be reflected in the data warehouse. If the primary key of the source data is required for reporting, the dimension already contains that piece of information for each row. If the source data uses a surrogate key, the warehouse must keep track of it even though it is never used in queries or reports; this is done by creating a [[lookup table]] that contains the warehouse surrogate key and the originating key.<ref name="Rizzi, Data Warehouse Design p. 291">Golfarelli/Rizzi, Data Warehouse Design, p. 291</ref> This way, the dimension is not polluted with surrogates from various source systems, while the ability to update is preserved.

The lookup table is used in different ways depending on the nature of the source data. There are five types to consider;<ref name="Rizzi, Data Warehouse Design p. 291"/> three are included here:

;Type 1
:The dimension row is simply updated to match the current state of the source system; the warehouse does not capture history; the lookup table is used to identify the dimension row to update or overwrite.
;Type 2
:A new dimension row is added with the new state of the source system; a new surrogate key is assigned; the source key is no longer unique in the lookup table.
;Fully logged
:A new dimension row is added with the new state of the source system, while the previous dimension row is updated to reflect that it is no longer active, along with its time of deactivation.
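As a rough sketch of the first two options (again with hypothetical names, and with Python dictionaries in place of real warehouse tables), the lookup table below maps each originating key to the surrogate keys assigned in the warehouse; Type 1 overwrites the current row, while Type 2 adds a new row with a fresh surrogate key:

<syntaxhighlight lang="python">
import itertools

surrogate_keys = itertools.count(start=1)  # warehouse-generated integer surrogates
dimension = {}   # surrogate_key -> dimension row
lookup = {}      # originating source key -> list of surrogate keys assigned so far

def load_type1(source_key, attributes):
    """Type 1: overwrite the current dimension row; no history is kept."""
    if source_key in lookup:
        dimension[lookup[source_key][-1]].update(attributes)
    else:
        key = next(surrogate_keys)
        lookup[source_key] = [key]
        dimension[key] = dict(attributes)

def load_type2(source_key, attributes):
    """Type 2: always add a new row, so the source key maps to several surrogates."""
    key = next(surrogate_keys)
    lookup.setdefault(source_key, []).append(key)
    dimension[key] = dict(attributes)

load_type2("CUST-42", {"name": "A. Smith", "city": "Boston"})
load_type2("CUST-42", {"name": "A. Smith", "city": "Chicago"})  # history kept as a second row
</syntaxhighlight>

A fully logged variant would additionally mark the superseded row as no longer active and record its time of deactivation.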
=== Performance ===
{{Unreferenced section|date=September 2024}}
ETL vendors benchmark their record-systems at multiple TB (terabytes) per hour (or ~1 GB per second) using powerful servers with multiple CPUs, multiple hard drives, multiple gigabit-network connections, and much memory.

In real life, the slowest part of an ETL process usually occurs in the database load phase. Databases may perform slowly because they have to take care of concurrency, integrity maintenance, and indices. Thus, for better performance, it may make sense to employ:
* ''Direct path extract'' method or bulk unload whenever possible (instead of querying the database) to reduce the load on the source system while getting a high-speed extract
* Most of the transformation processing outside of the database
* Bulk load operations whenever possible

Still, even using bulk operations, database access is usually the bottleneck in the ETL process. Some common methods used to increase performance are:
* [[partition (database)|Partition]] tables (and indices): try to keep partitions similar in size (watch for <code>null</code> values that can skew the partitioning)
* Do all validation in the ETL layer before the load: disable [[data integrity|integrity]] checking (<code>disable constraint</code> ...) in the target database tables during the load
* Disable [[database trigger|triggers]] (<code>disable trigger</code> ...) in the target database tables during the load: simulate their effect as a separate step
* Generate IDs in the ETL layer (not in the database)
* Drop the [[database index|indices]] (on a table or partition) before the load – and recreate them after the load (SQL: <code>drop index</code> ...<code>; create index</code> ...)
* Use parallel bulk load when possible – this works well when the table is partitioned or there are no indices (Note: attempting to do parallel loads into the same table (partition) usually causes locks – if not on the data rows, then on indices)
* If a requirement exists to do insertions, updates, or deletions, find out which rows should be processed in which way in the ETL layer, and then process these three operations in the database separately; you can often do a bulk load for inserts, but updates and deletes commonly go through an [[API]] (using [[SQL]])

Whether to do certain operations in the database or outside may involve a trade-off. For example, removing duplicates using <code>distinct</code> may be slow in the database; thus, it makes sense to do it outside. On the other hand, if using <code>distinct</code> significantly (×100) decreases the number of rows to be extracted, then it makes sense to remove duplicates as early as possible in the database before unloading data.

A common source of problems in ETL is a large number of dependencies among ETL jobs. For example, job "B" cannot start while job "A" is not finished. One can usually achieve better performance by visualizing all processes on a graph, and trying to reduce the graph, making maximum use of [[parallel computing|parallelism]] and making "chains" of consecutive processing as short as possible. Again, partitioning of big tables and their indices can really help.

Another common issue occurs when the data are spread among several databases, and processing is done in those databases sequentially. Sometimes database replication may be involved as a method of copying data between databases – it can significantly slow down the whole process. The common solution is to reduce the processing graph to only three layers:
* Sources
* Central ETL layer
* Targets

This approach allows processing to take maximum advantage of parallelism. For example, if you need to load data into two databases, you can run the loads in parallel (instead of loading into the first and then replicating into the second).

Sometimes processing must take place sequentially. For example, dimensional (reference) data are needed before one can get and validate the rows for the main [[Fact table|"fact" tables]].

=== Parallel computing ===
{{Unreferenced section|date=September 2024}}
Some ETL software implementations include [[Parallel computing|parallel processing]]. This enables a number of methods to improve overall performance of ETL when dealing with large volumes of data.

ETL applications implement three main types of parallelism:
* Data: By splitting a single sequential file into smaller data files to provide [[Parallel Random Access Machine|parallel access]]
* [[pipeline (computing)|Pipeline]]: Allowing the simultaneous running of several components on the same [[data stream]], e.g. looking up a value on record 1 at the same time as adding two fields on record 2
* Component: The simultaneous running of multiple [[process (computing)|processes]] on different data streams in the same job, e.g. sorting one input file while removing duplicates on another file

All three types of parallelism usually operate combined in a single job or task.
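As an illustrative sketch only (the function, table, and target names are made up), the following Python fragment shows component-style parallelism applied to the earlier point about loading two target databases at the same time rather than loading one and then replicating into the other:

<syntaxhighlight lang="python">
from concurrent.futures import ThreadPoolExecutor

def load_target(target_name, rows):
    """Placeholder load step; a real job would bulk-load the rows into the target."""
    print(f"loaded {len(rows)} rows into {target_name}")

# Output of the (hypothetical) central ETL layer.
transformed_rows = [{"id": i, "value": i * 10} for i in range(1_000)]

# The two target loads run at the same time instead of loading one target
# and then replicating its contents to the other.
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [
        pool.submit(load_target, "warehouse_a", transformed_rows),
        pool.submit(load_target, "warehouse_b", transformed_rows),
    ]
    for future in futures:
        future.result()  # re-raise any load error here
</syntaxhighlight>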
An additional difficulty comes with making sure that the data being uploaded is relatively consistent. Because multiple source databases may have different update cycles (some may be updated every few minutes, while others may take days or weeks), an ETL system may be required to hold back certain data until all sources are synchronized. Likewise, where a warehouse may have to be reconciled to the contents in a source system or with the general ledger, establishing synchronization and reconciliation points becomes necessary.

=== Failure recovery ===
{{Unreferenced section|date=September 2024}}
Data warehousing procedures usually subdivide a big ETL process into smaller pieces running sequentially or in parallel. To keep track of data flows, it makes sense to tag each data row with "row_id", and tag each piece of the process with "run_id". In case of a failure, having these IDs helps to roll back and rerun the failed piece.

Best practice also calls for ''checkpoints'', which are states when certain phases of the process are completed. Once at a checkpoint, it is a good idea to write everything to disk, clean out some temporary files, log the state, etc.
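A minimal sketch of these ideas, assuming a hypothetical piece of an ETL process whose rows fit in memory and whose state is checkpointed to local JSON files, might look like this in Python:

<syntaxhighlight lang="python">
import json
import os
import uuid

def run_piece(piece_name, rows, checkpoint_dir="checkpoints"):
    """Run one piece of an ETL process, tagging rows and recording a checkpoint."""
    run_id = uuid.uuid4().hex  # identifies this execution of this piece
    tagged = [dict(row, row_id=i, run_id=run_id) for i, row in enumerate(rows)]

    # ... transform and load `tagged` here ...

    # Checkpoint: persist enough state to know what to roll back and rerun on failure.
    os.makedirs(checkpoint_dir, exist_ok=True)
    with open(os.path.join(checkpoint_dir, piece_name + ".json"), "w") as f:
        json.dump({"run_id": run_id, "rows_processed": len(tagged)}, f)
    return tagged

run_piece("load_customers", [{"name": "A. Smith"}, {"name": "B. Jones"}])
</syntaxhighlight>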