In relational database management systems (RDBMSs), user and system data is stored in tables. Each table consists of a set of rows that describe entities and a set of columns that hold the attributes of an entity.
For example, a Customer table would have columns such as CustomerName and CreditLimit and a row for each customer. In Microsoft SQL Server data management software, tables are contained within schemas that are very similar in concept to folders that contain files in the operating system. Designing tables is often one of the most important roles that a database developer undertakes because incorrect table design leads to the inability to query the data efficiently. After an appropriate design has been created, it is then important to know how to correctly implement the design.
The most basic types of data stored in database systems are numbers, dates, and strings, and a range of data types is available for each of these.
Data types are a form of constraint (on both type and range) placed on the values that can be stored in a location. For example, if you choose the smallint data type, you cannot store text in that location, nor a number outside smallint’s range (-32,768 to 32,767). The data type also determines which sorts of operations are permitted on the data and how those operations behave. Choosing an appropriate data type also provides a level of self-documentation: if every value were stored as a string (which could potentially represent any type of value) or as XML, you would probably need separate documentation describing what sort of values each location is allowed to hold.
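As a minimal sketch of how a data type constrains both type and range (the table and column names here are illustrative, not part of the examples later in this post):

```sql
-- Hypothetical table to illustrate data-type constraints.
CREATE TABLE dbo.RangeDemo
(
    SmallValue SMALLINT NOT NULL  -- accepts -32,768 through 32,767
);

INSERT INTO dbo.RangeDemo (SmallValue) VALUES (32767);    -- succeeds: within range
-- INSERT INTO dbo.RangeDemo (SmallValue) VALUES (40000); -- fails: arithmetic overflow
-- INSERT INTO dbo.RangeDemo (SmallValue) VALUES ('abc'); -- fails: conversion error
```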
The best practices for table and column design are often represented by a set of rules that are known as “normalization” rules. Normalization is a systematic process that is used to improve the design of databases. Not all databases should be normalized. It is common to intentionally denormalize databases for performance reasons or for ease of end-user analysis. For example, dimensional models that are widely used in data warehouses (such as the data warehouses that are commonly used with SQL Server Analysis Services) are intentionally designed not to be normalized. Tables might also be denormalized to avoid the need for time-consuming calculations or to minimize physical database design constraints such as locking. There are six normal forms, but normalizing a design to third normal form is sufficient in most cases.
- First Normal Form: Create a separate table for each set of related data. Identify each set of related data by using a primary key. Column values should not include repeating groups. Duplicate rows should not exist in tables. You can use unique keys to avoid having duplicate rows.
- Second Normal Form: Create separate tables for sets of values that apply to multiple records. Relate these tables by using a foreign key.
- Third Normal Form: Eliminate fields that do not depend on the key.
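As a sketch of how these rules apply in practice (the Customers and Orders tables below are hypothetical, not part of the examples later in this post), a design that repeats customer details on every order row can be split into two related tables:

```sql
-- Before: customer details repeated on every order row, so a customer's
-- name and phone number depend only on part of the data, not on the key.
-- CREATE TABLE dbo.Orders (OrderID INT, CustomerName NVARCHAR(100),
--                          CustomerPhone NVARCHAR(20), OrderDate DATE);

-- After: customer attributes move to their own table, related by a foreign key.
CREATE TABLE dbo.Customers
(
    CustomerID INT IDENTITY(1,1) PRIMARY KEY,
    CustomerName NVARCHAR(100) NOT NULL,
    CustomerPhone NVARCHAR(20) NULL
);

CREATE TABLE dbo.Orders
(
    OrderID INT IDENTITY(1,1) PRIMARY KEY,
    CustomerID INT NOT NULL FOREIGN KEY REFERENCES dbo.Customers (CustomerID),
    OrderDate DATE NOT NULL
);
```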
A primary key is a form of constraint that is applied to a table. A candidate key is a column or set of columns that can be used to uniquely identify a row. The primary key is chosen from among the candidate keys. A primary key must be unique and cannot be NULL.
A foreign key is used to establish references or relationships between tables. It requires that the value held in a column of one table match the primary key (or another unique key) of a row in the referenced table. In SQL Server, the reference is only checked when the column that holds the foreign key value is not NULL. A table can also include multiple foreign key references. Foreign keys are said to “enforce referential integrity,” and they are another form of constraint.
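A minimal sketch of referential integrity in action, including the NULL behavior described above (the table names are illustrative):

```sql
CREATE TABLE dbo.Departments
(
    DepartmentID INT PRIMARY KEY,
    DepartmentName NVARCHAR(50) NOT NULL
);

CREATE TABLE dbo.Staff
(
    StaffID INT PRIMARY KEY,
    DepartmentID INT NULL FOREIGN KEY REFERENCES dbo.Departments (DepartmentID)
);

INSERT INTO dbo.Departments (DepartmentID, DepartmentName) VALUES (1, N'Sales');

INSERT INTO dbo.Staff (StaffID, DepartmentID) VALUES (1, 1);    -- succeeds: department 1 exists
-- INSERT INTO dbo.Staff (StaffID, DepartmentID) VALUES (2, 9); -- fails: no department 9
INSERT INTO dbo.Staff (StaffID, DepartmentID) VALUES (3, NULL); -- succeeds: NULL is not checked
```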
Schemas are used as containers for objects such as tables, views, and stored procedures. Schemas can be particularly helpful in providing a level of organization and structure when large numbers of objects are present in a database. It is also possible to assign security permissions at the schema level rather than individually on the objects that are contained within the schemas. Schemas form a part of the multipart naming convention for objects. In SQL Server, an object is formally referred to by a name of the form Server.Database.Schema.Object.
It is important to include schema names when referring to objects instead of depending upon schema name resolution. Apart from rare situations, using multipart names leads to more reliable code that does not depend upon default schema settings.
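A short sketch of creating a schema and granting permissions at the schema level (the schema name and the ReportingUsers role are illustrative assumptions):

```sql
-- Create a schema to group reporting objects.
CREATE SCHEMA Reporting;
GO

-- Granting SELECT at the schema level covers every table and view the
-- schema contains, instead of granting permissions object by object.
GRANT SELECT ON SCHEMA::Reporting TO ReportingUsers;
```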
When you create tables by using the CREATE TABLE statement, make sure that you supply both a schema name and a table name. If the schema name is not specified, the table will be created in the default schema of the user who is executing the statement. This could lead to the creation of scripts that are not robust because they could generate different schema designs when different users execute them. You should specify NULL or NOT NULL for each column in the table. You can specify a primary key constraint beside the name of a column if only a single column is included in the key. It must be included after the list of columns when more than one column is included in the key.
The Database Engine can be installed as case-sensitive or case-insensitive. If the Database Engine is installed as case-sensitive, object names must always be written with the same case. For example, a table named OrderData is a different table from a table named ORDERDATA. If the Database Engine is installed as case-insensitive, those two names refer to the same table, and that name can only be used once.
```sql
CREATE TABLE HR.Employees_Personal_Information
(
    EmployeeID INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    FirstName NVARCHAR(20) NOT NULL,
    LastName NVARCHAR(40) NOT NULL,
    ICNumber INT NOT NULL,
    DateOfBirth DATE NOT NULL,
    YearOfBirth AS YEAR(DateOfBirth) PERSISTED
);

CREATE TABLE HR.Employees_Contact_Information
(
    EmployeeID INT NOT NULL FOREIGN KEY REFERENCES HR.Employees_Personal_Information (EmployeeID),
    FirstName NVARCHAR(20) NOT NULL,
    LastName NVARCHAR(40) NOT NULL,
    Address1 NVARCHAR(100) NULL,
    Address2 NVARCHAR(100) NULL,
    PhoneNumber INT NULL
);
```
The DROP TABLE statement is used to drop tables from a database. If a table is referenced by a foreign key constraint, it cannot be dropped. When dropping a table, all permissions, constraints, indexes, and triggers that are related to the table are also dropped. Deletion is permanent.
```sql
-- HR.Employees_Contact_Information can be dropped because no foreign key
-- references it; dropping HR.Employees_Personal_Information would fail
-- while the foreign key from the contact table still exists.
DROP TABLE HR.Employees_Contact_Information;
```
Altering a table is useful because permissions on the table are retained along with the data in the table. If you drop and re-create the table with a new definition, both the permissions on the table and the data in the table are lost. If the table is referenced by a foreign key, it cannot be dropped. However, it can be altered. Tables are modified by using the ALTER TABLE statement. You can use this statement to add or drop columns and constraints or to enable or disable constraints and triggers.
```sql
ALTER TABLE HR.Employees_Contact_Information DROP COLUMN Address2;
```
Computed columns are columns that are derived from other columns or from the result of executing functions. A nonpersisted computed column is calculated every time a SELECT operation occurs on the column and it does not consume space on disk. A persisted computed column is calculated when the data in the row is inserted or updated and does consume space on the disk. The data in the column is then selected like the data in any other column. The core difference between persisted and nonpersisted computed columns relates to when the computational performance impact is exerted. Nonpersisted computed columns work best for data that is modified regularly, but selected rarely. Persisted computed columns work best for data that is modified rarely, but selected regularly. In most business systems, data is read much more regularly than it is updated. For this reason, most computed columns would perform best as persisted computed columns.
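As a sketch, the two styles can be compared side by side (the OrderLines table and its columns are hypothetical):

```sql
-- Hypothetical table comparing the two kinds of computed column.
CREATE TABLE dbo.OrderLines
(
    UnitPrice MONEY NOT NULL,
    Quantity INT NOT NULL,
    -- Nonpersisted: recalculated on every SELECT, consumes no disk space.
    LineTotal AS (UnitPrice * Quantity),
    -- Persisted: calculated on INSERT/UPDATE and stored on disk.
    LineTotalStored AS (UnitPrice * Quantity) PERSISTED
);
```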
I hope this blog post was informative for you.