1. Demand Analysis Phase

The data flow diagram expresses the relationship between data and the processes that transform it. The data in the system is described by means of the data dictionary (abbreviated DD). The data dictionary is a collection of descriptions of various kinds of data: it describes the data in the database, that is, the metadata, not the data itself. A data dictionary typically includes five parts - data items, data structures, data flows, data stores, and processes - and records, at a minimum, the data type of each field and the primary key of each table.

Data item description = {data item name, meaning, alias, data type, length, value range, meaning of values, logical relationships with other data items}

Data structure description = {data structure name, meaning, composition: {data items or data structures}}

Data flow description = {data flow name, description, source, destination, composition: {data structures}, average traffic, peak traffic}

Data store description = {data store name, description, number, inflowing data flows, outflowing data flows, composition: {data structures}, data volume, access method}

Process description = {process name, description, input: {data flows}, output: {data flows}, processing: {brief description}}

For example, the data item "customer code" might be described as: alias "customer number", character type, length 6, value range 000001-999999, each value uniquely identifying one customer.

2. Concept Structure Design Phase

By integrating user requirements, this phase forms a conceptual model independent of any concrete DBMS, which can be represented by an ER (entity-relationship) diagram. The steps for building the ER model are as follows.

2.1 Initialize the project
Starting from the purpose description and scope description, determine the modeling goal, develop the modeling plan, organize the modeling team, collect source material, and formulate constraints and specifications. Collecting source material is the focus of this phase: basic data sheets are formed from the results of investigation and observation, the business processes, the inputs and outputs of the original system, various reports, and the collected raw data.

2.2 Step 1 - Define entities
The members of an entity set share a common set of features and attributes, and most entities can be identified, directly or indirectly, in the collected source material (the basic data sheets). Terms in the source material that end in "code", such as customer code, agent code, and product code, point to potential entities; identifying them yields a preliminary entity table.

2.3 Step 2 - Define binary relationships
In the IDEF1X model, an n-ary relationship must be decomposed into n binary relationships. Based on the actual business needs and rules, use an entity-relationship matrix to identify the binary relationships between entities, then determine each relationship's cardinality, name, and description, and classify its type: identifying, non-identifying (mandatory or optional), non-specific, or categorization. If every instance of the child entity must be identified through its relationship with the parent entity, the relationship is identifying; otherwise it is non-identifying. In a non-identifying relationship, if every instance of the child entity must be associated with some instance of the parent entity, the relationship is mandatory; otherwise it is optional. If the parent entity and the child entity represent the same real-world object, they form a categorization relationship.
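As a rough sketch of the identifying/non-identifying distinction in SQL (the Customer, Orders, and OrderLine tables are hypothetical, not from the original text):

    -- Assumes a Customer table with primary key customer_id CHAR(6) exists.

    -- Non-identifying (mandatory) relationship: an Order has its own identity;
    -- the parent's key appears only as an ordinary foreign key column.
    CREATE TABLE Orders (
        order_id    INT     NOT NULL PRIMARY KEY,
        customer_id CHAR(6) NOT NULL REFERENCES Customer(customer_id)
    );

    -- Identifying relationship: an OrderLine can only be identified through
    -- its Order; the parent's key is part of the child's primary key.
    CREATE TABLE OrderLine (
        order_id INT NOT NULL REFERENCES Orders(order_id),
        line_no  INT NOT NULL,
        quantity INT NOT NULL,
        PRIMARY KEY (order_id, line_no)
    );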
2.4 Step 3 - Define keys
Remove the non-specific relationships produced in the previous step by introducing intersection entities. Then, starting from the non-intersection and independent entities, identify the candidate key attributes that uniquely identify each entity instance, and choose the primary key from among the candidate keys. To guarantee the validity of the primary keys and relationships, enforce the no-null rule and the no-repeat rule: no attribute of an entity instance may be null, and no attribute may hold more than one value at a time. Find any mistakenly defined relationships, decompose entities further where necessary, and finally construct the key-based (KB) view of the IDEF1X model.

2.5 Step 4 - Define attributes
Develop an attribute table from the source data sheets and determine the owner entity of each attribute. Define the non-key attributes and check that they obey the no-null and no-repeat rules. In addition, check the full-functional-dependency rule and the no-transitive-dependency rule, ensuring that every non-key attribute depends on the primary key, the whole primary key, and nothing but the primary key. For example, if an Order entity keyed by order_id also stored customer_name, that attribute would depend on customer_id rather than on the key order_id - a transitive dependency to be removed by moving the attribute to the Customer entity. The result is the refined, fully attributed (FA) view of the IDEF1X model, which conforms at least to third normal form of relational theory.

2.6 Step 5 - Define other objects and rules
Define the data type, length, precision, nullability, default value, constraint rules, and so on, of each attribute. Define object information such as triggers, stored procedures, views, roles, synonyms, and sequences.

3. Logical Structure Design Phase
Convert the conceptual structure into a data model supported by some DBMS (for example, the relational model), and optimize it. Logical design should first select the data model best suited to expressing the conceptual structure, and then select the most suitable DBMS.

4. Database Physical Design Phase
Select the physical structure (including the storage structure and access methods) best suited to the application environment for the logical data model. Based on the features of the DBMS and the needs of the processing, plan the physical storage, design the indexes, and form the internal schema of the database.

5. Database Implementation Phase
Using the data language provided by the DBMS (e.g., SQL) and its host language (e.g., C), and following the results of the logical and physical design, establish the database, write and debug the application programs, organize the data loading, and carry out a trial run. Database implementation mainly includes the following work: defining the database structure with DDL (a sketch follows below), organizing the data loading, compiling and debugging the applications, and trial operation of the database.

6. Database Operation and Maintenance Phase
After the trial run, the database application system is put into formal operation. During operation the system must be continually evaluated, adjusted, and modified. This includes: database dump and recovery; database security and integrity control; monitoring, analysis, and improvement of database performance; and database reorganization and reconstruction.

Modeling tools are used to speed up database design. There are many database CASE tools available, such as Rational's Rose, CA's ERwin and BPwin, Sybase's PowerDesigner, and Oracle's Oracle Designer.
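As a minimal sketch of "define database structure with DDL" from phase 5, also exercising the attribute rules of step 2.6 (data type, length, precision, non-null, default, and constraint rules); the table and view are hypothetical:

    CREATE TABLE Customer (
        customer_id  CHAR(6)       NOT NULL PRIMARY KEY,      -- entity integrity
        name         VARCHAR(80)   NOT NULL,                  -- no-null rule
        status_code  CHAR(2)       NOT NULL DEFAULT '00',     -- default value
        credit_limit DECIMAL(10,2) CHECK (credit_limit >= 0)  -- constraint rule
    );

    -- Other objects from step 2.6, such as views, are defined in the same phase.
    CREATE VIEW active_customer AS
        SELECT customer_id, name FROM Customer WHERE status_code = '01';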
Index Usage Principles

An index is one of the most efficient ways to get data out of the database; 95% of database performance problems can be solved with indexing techniques.

1) Use a unique clustered index for the logical primary key, a unique non-clustered index for system keys (such as those used by stored procedures), and a non-clustered index for any foreign key column (as shown in the sketch after this list). Also consider how much space the database has, how the tables are accessed, and whether the access is mainly reads or writes.

2) Most databases automatically index the primary key field, but do not forget to index the foreign keys; they are used frequently, for example in a query that shows a record from the master table together with all of its associated detail records.

3) Do not index memo/note fields, and do not index large fields (those holding many characters); the index would consume too much storage space.

4) Do not index commonly used small tables. Do not set any keys on small data tables that frequently undergo insert and delete operations; maintaining the index for those insertions and deletions may consume more time than scanning the table would.
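A minimal sketch of principles 1) and 2), reusing the hypothetical Orders table from earlier; CLUSTERED/NONCLUSTERED is SQL Server-style syntax:

    -- Unique clustered index on the logical primary key (many DBMSs create
    -- this automatically when the PRIMARY KEY is declared).
    CREATE UNIQUE CLUSTERED INDEX ix_orders_id ON Orders (order_id);

    -- Non-clustered index on a foreign key column, speeding up master/detail
    -- queries such as "one customer and all of its orders".
    CREATE NONCLUSTERED INDEX ix_orders_customer ON Orders (customer_id);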
4. Data Integrity Design (Database Logical Design)

1) Integrity implementation mechanisms. Entity integrity: the primary key. Referential integrity: when the parent table deletes data - cascaded delete, restricted delete, or set null; when the parent table inserts data - restricted insert or recursive insert; when the parent table updates data - cascaded update, restricted update, or set null. A DBMS provides two ways to implement referential integrity: the foreign key mechanism (constraint rules) and the trigger mechanism. User-defined integrity: NOT NULL, CHECK, and triggers. (See the first sketch after this list.)

2) Enforce data integrity with constraints rather than business rules. Use the database system itself to achieve data integrity. This includes not only integrity through normalization but also the functional integrity of the data; triggers can additionally be used to guarantee the correctness of data as it is written. Do not rely on the business layer to ensure data integrity: it cannot guarantee integrity between tables (foreign keys), so it cannot substitute for the other integrity rules.

3) Enforce declarative integrity so that harmful data is rejected before it enters the database. Activate the integrity features of the database system. This keeps the data clean, at the cost of forcing developers to spend more time handling error conditions.

4) Use lookup tables to control data integrity. The best way to control data integrity is to limit the user's choices: wherever possible, give users a clear list of values to choose from. This reduces errors and mistyped codes and provides data consistency. Some common data is especially suited to lookup tables: country codes, status codes, and so on. (See the second sketch after this list.)

5) Adopt views. To provide another layer of abstraction between the database and the application code, establish dedicated views for the application instead of letting it access the data tables directly. This also gives you more freedom when handling database changes.
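A minimal sketch of the foreign key (constraint) mechanism from 1), with hypothetical tables; the clauses correspond to the cascaded, set-null, and restricted actions listed above:

    -- Entity integrity via the primary key; user-defined integrity via
    -- NOT NULL and CHECK.
    CREATE TABLE Account (
        account_id INT           NOT NULL PRIMARY KEY,
        balance    DECIMAL(12,2) NOT NULL CHECK (balance >= 0)
    );

    -- Referential integrity via the foreign key mechanism: deleting a parent
    -- Account cascades to its entries, and updating the parent key sets the
    -- child column to NULL. Omitting a clause gives the restricted behavior.
    CREATE TABLE LedgerEntry (
        entry_id   INT NOT NULL PRIMARY KEY,
        account_id INT REFERENCES Account(account_id)
                       ON DELETE CASCADE
                       ON UPDATE SET NULL
    );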
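And a sketch of the lookup table from 4) together with the view from 5), reusing the hypothetical Customer table defined earlier:

    -- Lookup table: the only legal status codes are the rows stored here.
    CREATE TABLE StatusCode (
        status_code CHAR(2)     NOT NULL PRIMARY KEY,
        description VARCHAR(40) NOT NULL
    );
    INSERT INTO StatusCode VALUES ('00', 'new');
    INSERT INTO StatusCode VALUES ('01', 'active');

    -- The referencing column may now hold only values from the lookup table.
    ALTER TABLE Customer
        ADD CONSTRAINT fk_customer_status
        FOREIGN KEY (status_code) REFERENCES StatusCode(status_code);

    -- View: the application reads this instead of the base tables, so the
    -- base tables can change without breaking application code.
    CREATE VIEW customer_overview AS
        SELECT c.customer_id, c.name, s.description AS status
        FROM Customer c JOIN StatusCode s ON s.status_code = c.status_code;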
5. Other Design Skills

1) Avoid triggers. The functionality of a trigger can usually be implemented in other ways, and triggers become a source of interference when debugging the application. If you really do need a trigger, be sure to document it thoroughly.

2) Use plain English (or any other natural language) rather than codes. When creating drop-down menus, lists, and reports, use English names directly; if codes are required, attach descriptions the user already knows.

3) Keep a table of general information. A table storing general database information is very useful. Store in it the current version of the database, the most recent check/repair (for Access), the name of the design document, customer information, and so on. This provides a simple mechanism for tracking the database, and it is especially useful in non-client/server environments when customers complain that their database does not live up to their requirements. (See the sketch at the end of this section.)

4) Include a version mechanism. Introduce a version-control mechanism in the database to determine which version of the database is in use. Over a long period, user needs will always change, and modifying the database structure may eventually be required; storing the version information directly in the database makes this much more convenient.

5) Compile documentation. Prepare documents for all shortcuts, naming conventions, restrictions, and functions. Use database tools that annotate tables, columns, and triggers; such annotations are very useful for development, support, and tracking modifications. Document the database either inside the database itself or in separate documents, so that when you return after a year or more to build version 2, the chance of making mistakes is greatly reduced.

6) Test, test, and test again. After building or revising the database, you must test the data fields with data newly entered by users. Most importantly, let the users do the testing and make sure the chosen data types meet the business requirements. Testing must be completed before the new database is put into actual service.

7) Check the design. A common technique for checking the database design during development is to examine the database through the application prototypes it supports. In other words, for each prototype that ultimately presents the data, make sure you have checked the data model and seen how the data will be retrieved.
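Finally, a sketch of the general-information table from 3) combined with the version field from 4); the table name, columns, and values are illustrative only:

    -- One row describing this database: version, last check/repair,
    -- design document, and customer.
    CREATE TABLE db_info (
        db_version    VARCHAR(10)  NOT NULL,
        last_checked  DATE,
        design_doc    VARCHAR(120),
        customer_name VARCHAR(80)
    );

    INSERT INTO db_info (db_version, design_doc, customer_name)
    VALUES ('1.0', 'db-design-v1.doc', 'Example Corp');

    -- An upgrade script can then check the version before it runs:
    --   SELECT db_version FROM db_info;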