=== Normalization ===
{{Main|Database normalization}}
In the field of [[relational database]] design, ''normalization'' is a systematic way of ensuring that a database structure is suitable for general-purpose querying and free of certain undesirable characteristics: insertion, update, and deletion anomalies that could lead to loss of [[data integrity]].

A standard piece of database design guidance is that the designer should create a fully normalized design; selective [[denormalization]] can subsequently be performed, but only for [[computer performance|performance]] reasons. The trade-off is storage space versus performance: the more normalized the design, the less data redundancy there is (and therefore the less space it takes to store), but common data retrieval patterns may then require complex joins, merges, and sorts, which consume more reads and compute cycles (a sketch of this trade-off appears at the end of this section). Some modeling disciplines, such as the [[dimensional modeling]] approach to [[data warehouse]] design, explicitly recommend non-normalized designs, i.e. designs that in large part do not adhere to [[Third normal form|3NF]]. The normal forms are [[First normal form|1NF]], [[Second normal form|2NF]], 3NF, [[Boyce–Codd normal form|Boyce–Codd normal form (3.5NF)]], [[Fourth normal form|4NF]], [[Fifth normal form|5NF]] and [[Sixth normal form|6NF]].

Document databases take a different approach. A document stored in such a database typically contains more than one normalized data unit, and often the relationships between the units as well. If those data units and relationships are frequently retrieved together, this approach reduces the number of retrievals. It also simplifies replication, because there is a clearly identifiable unit of data whose consistency is self-contained. Another consideration is that reading or writing a single document in such databases requires only a single transaction, which can be important in a [[microservices]] architecture. In such architectures, portions of the document are often retrieved from other services via an API and stored locally for efficiency. If the data units were split across the services, a read (or write) on behalf of a service consumer might require more than one service call, which could force the consumer to manage multiple transactions; this may not be preferred.
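The following minimal sketch illustrates the normalization trade-off described above, using Python's built-in <code>sqlite3</code> module. The table names, column names, and data values (<code>customer</code>, <code>purchase</code>, <code>purchase_flat</code>, and so on) are illustrative assumptions, not taken from any particular system.

<syntaxhighlight lang="python">
# A minimal sketch of the normalization trade-off, using Python's built-in
# sqlite3 module.  All table, column, and data values are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Normalized design: customer attributes are stored once and referenced by key,
# so changing a customer's city touches a single row (no update anomaly).
cur.executescript("""
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    city        TEXT NOT NULL
);
CREATE TABLE purchase (
    purchase_id INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
    item        TEXT NOT NULL
);
""")
cur.execute("INSERT INTO customer VALUES (1, 'Ada', 'London')")
cur.executemany("INSERT INTO purchase VALUES (?, 1, ?)", [(1, "book"), (2, "lamp")])

# Retrieval now needs a join: the price of less redundancy is extra work at read time.
print(cur.execute("""
    SELECT c.name, c.city, p.item
    FROM purchase AS p JOIN customer AS c ON c.customer_id = p.customer_id
""").fetchall())
# [('Ada', 'London', 'book'), ('Ada', 'London', 'lamp')]

# Denormalized design: one wide table answers the same query without a join,
# but repeats the customer's name and city in every row, so it occupies more
# space and an update to the city must touch every matching row.
cur.executescript("""
CREATE TABLE purchase_flat (
    purchase_id INTEGER PRIMARY KEY,
    name TEXT, city TEXT, item TEXT
);
""")
cur.executemany("INSERT INTO purchase_flat VALUES (?, ?, ?, ?)",
                [(1, "Ada", "London", "book"), (2, "Ada", "London", "lamp")])
print(cur.execute("SELECT name, city, item FROM purchase_flat").fetchall())
</syntaxhighlight>

For comparison, a document-oriented store as described above would hold the same information as one self-contained document, so a single read or write (and a single transaction) covers the customer, the purchases, and the relationship between them. The sketch below only models the shape of such a document in plain Python; the field names are assumptions, not the schema of any particular document database.

<syntaxhighlight lang="python">
# A minimal sketch of the document-oriented alternative: the related data units
# and their relationship live in one self-contained document.  Field names are
# illustrative assumptions, not the schema of any particular document database.
import json

order_document = {
    "customer": {"customer_id": 1, "name": "Ada", "city": "London"},
    "purchases": [
        {"purchase_id": 1, "item": "book"},
        {"purchase_id": 2, "item": "lamp"},
    ],
}

# In a document store this value would be written, read, and replicated as one
# unit; here it is simply serialized to show its shape.
print(json.dumps(order_document, indent=2))
</syntaxhighlight>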