SQL Story Excerpts (1) ---- Simple Queries

zhaozj  2021-02-11  195

Back when I was on 9CBS, I kept advising the friends who posted questions to use join queries, but the response was always lukewarm. A simple requirement often ends up written as nested subqueries or a cursor, becoming needlessly long and complicated. Admittedly, that style costs a beginner little effort and the idea behind it is easier to follow, so such answers frequently collect the points. But once you are truly at home with the SQL style of programming, you will find that the join query is usually the most direct, clearest, and most powerful approach, and that the best solution often needs no trickery at all: a single simple query settles the matter. Let me give a few examples to support this view.

Example 1-1: Finding and handling duplicate records

Someone is always asking online: my table contains duplicate records, what should I do? Of course, in a relational database with a good design style every table has a primary key, or at least a unique index, so fully duplicated rows should never appear. However, things that should not happen sometimes do, like the "July 7th Incident", like "9/11" ... ahem. What I actually mean is that sometimes the person who built the table simply did not know what a primary key is, or what an automatically numbered identity column is for (which is nothing to be ashamed of; nobody is born knowing how to design a database, the key is to recognize the weakness and improve). More commonly, our data comes from spreadsheets or text files, and the problem is only discovered after it has been imported into the database.

Here we create a table recording the products of a store. I deliberately give it no index or constraint whatsoever, so that it can run into trouble easily (much like the nude mice used in laboratories):

CREATE TABLE Product
(
    ID INT,
    PName CHAR(20),
    Price MONEY,
    Number INT,
    PDescription VARCHAR(50)
)

Now we insert some data (a sketch of the INSERT statements follows the table below):

ID  PName      Price  Number  PDescription
1   Apple      12     3000    NULL
1   Apple      12     3000    NULL
2   Banana     16.99  7600    NULL
3   Olive      25.22  4500    NULL
4   Orange     15.99  5500    NULL
4   Coconut    40.99  2000    NULL
5   Pineapple  30     2500    NULL
6   Olive      25.22  3000    NULL
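The original shows only the resulting rows, not the statements that load them, so the following is just a minimal sketch of how the sample data above could be inserted. The NULL descriptions are an assumption, consistent with the NULL shown in the duplicate-finding output further down.

INSERT INTO Product (ID, PName, Price, Number, PDescription) VALUES (1, 'Apple', 12, 3000, NULL)
-- the same row a second time, to create the fully duplicated record
INSERT INTO Product (ID, PName, Price, Number, PDescription) VALUES (1, 'Apple', 12, 3000, NULL)
INSERT INTO Product (ID, PName, Price, Number, PDescription) VALUES (2, 'Banana', 16.99, 7600, NULL)
INSERT INTO Product (ID, PName, Price, Number, PDescription) VALUES (3, 'Olive', 25.22, 4500, NULL)
INSERT INTO Product (ID, PName, Price, Number, PDescription) VALUES (4, 'Orange', 15.99, 5500, NULL)
INSERT INTO Product (ID, PName, Price, Number, PDescription) VALUES (4, 'Coconut', 40.99, 2000, NULL)
INSERT INTO Product (ID, PName, Price, Number, PDescription) VALUES (5, 'Pineapple', 30, 2500, NULL)
INSERT INTO Product (ID, PName, Price, Number, PDescription) VALUES (6, 'Olive', 25.22, 3000, NULL)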

There are some obvious problems here. The first two rows are completely identical; such duplicated data adds no meaning and only causes confusion. InterBase is a little more forgiving: you can edit such rows directly in its IBConsole. In SQL Server, however, the system has no way to tell the two rows apart, and an attempt to modify or delete either of them produces an error message. This is, in fact, in the nature of a relational database. So what should we do? Handling the problem is actually simpler than spotting the bad data, and no join query is needed. A single statement

SELECT DISTINCT * FROM Product

compresses the duplicated rows and yields a result set containing only normal data. The result is as follows:

ID  PName      Price  Number  PDescription
1   Apple      12     3000    NULL
2   Banana     16.99  7600    NULL
3   Olive      25.22  4500    NULL
4   Orange     15.99  5500    NULL
4   Coconut    40.99  2000    NULL
5   Pineapple  30     2500    NULL
6   Olive      25.22  3000    NULL
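A small aside that is not in the original post: before compressing anything, you can confirm whether fully duplicated rows exist at all by comparing the total row count with the distinct row count; if the two numbers differ, the table contains exact duplicates.

SELECT COUNT(*) AS total_rows,
       (SELECT COUNT(*) FROM (SELECT DISTINCT * FROM Product) AS d) AS distinct_rows
FROM Product
-- With the sample data this returns total_rows = 8 and distinct_rows = 7.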

For a database that supports the SELECT ... INTO ... FROM syntax, a statement such as

SELECT DISTINCT * INTO NewTable FROM Product

copies the de-duplicated data into a new table. Or you can use INSERT INTO ... SELECT DISTINCT * FROM ... to import it into an existing table. In short, once you have a correct data set, you can handle it however you like. I believe that once you know DISTINCT, the keyword for merging duplicated data, you will never again use a cursor to deal with duplicate rows.

That was the first step. Sometimes we do not want to compress the duplicates right away; we first want to see exactly which rows are duplicated. Fine, the following statement finds the duplicated records; the rightmost column, row_count, gives the number of times each row appears in the table:

SELECT ID, PName, Price, Number, PDescription, COUNT(*) AS row_count
FROM Product
GROUP BY ID, PName, Price, Number, PDescription
HAVING COUNT(*) > 1

ID  PName  Price  Number  PDescription  row_count
1   Apple  12     3000    NULL          2

(1 row affected)

This is simply the GROUP BY ... HAVING COUNT(*) construct together with an aggregate function. Remember to put the table's complete list of fields after GROUP BY: we are looking for rows whose data is identical in every field. When the Product table holds far more data, this is much cheaper than directly generating the whole corrected data set as the earlier method does, and with this small result set in hand we can work efficiently.

Now we use

SELECT ID, PName, Price, Number, PDescription
FROM Product
GROUP BY ID, PName, Price, Number, PDescription
HAVING COUNT(*) > 1

to produce one compressed, correct copy of each duplicated row, export it into a temporary table with the method shown earlier, then delete the duplicated rows from the Product table and insert the single copies from the temporary table back into Product (a sketch of the whole procedure is given below). After that, the Product table no longer contains fully duplicated, unmanageable data.
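The closing statements of the original are garbled, so here is a minimal sketch of the clean-up procedure as described, not the author's exact code. The temporary-table name #dup is my own choice, and the NULL-safe comparison on PDescription is spelled out because the sample duplicates have a NULL description; the other columns are assumed non-NULL, as in the sample data.

-- 1. Keep exactly one copy of every fully duplicated row.
SELECT ID, PName, Price, Number, PDescription
INTO #dup
FROM Product
GROUP BY ID, PName, Price, Number, PDescription
HAVING COUNT(*) > 1

-- 2. Delete all copies of those rows from Product.
DELETE FROM Product
WHERE EXISTS (
    SELECT *
    FROM #dup d
    WHERE d.ID = Product.ID
      AND d.PName = Product.PName
      AND d.Price = Product.Price
      AND d.Number = Product.Number
      AND (d.PDescription = Product.PDescription
           OR (d.PDescription IS NULL AND Product.PDescription IS NULL))
)

-- 3. Put the single copies back, then drop the temporary table.
INSERT INTO Product (ID, PName, Price, Number, PDescription)
SELECT ID, PName, Price, Number, PDescription
FROM #dup

DROP TABLE #dup

After step 3, SELECT * FROM Product should return the same seven rows that SELECT DISTINCT produced above.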

Please credit the original source when reprinting: https://www.9cbs.com/read-4595.html
