Voices from 9cbs on displaying large data sets in Delphi

zhaozj · 2021-02-16

Below are some posts I collected by searching search.9cbs.net for "10,000 records Delphi".

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
http://search.9cbs.net/expert/topic/865/865457.xml?temp=.976528313825075556
Reputation: 77 · 2002-07-12 19:07:11Z · Score: 5

ADO and BDE are different. By default, BDE does not fetch the entire result set: even if your SELECT matches 10,000 records, it does not pull all 10,000 down, only as many as are actually used. ADO is different: if CursorType is set to ctStatic, it keeps the whole result set in memory, which of course costs some time up front. But whether you use ADO or BDE, needing to SELECT 10,000 records at once usually points to a problem in the application design. The recommended settings are: CacheSize := 1000; CursorLocation := clUseClient; CursorType := ctStatic; LockType := ltBatchOptimistic;
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
http://search.9cbs.net/expert/topic/…?temp=.6567499
seekuface (seekuface) · Reputation: 100 · 2002-03-18 20:07:43Z · Score: 3

The key is to retrieve only a small amount of data at a time; pulling down a large volume of data in one go makes the program slow and inefficient. Changing the default parameters of some of Delphi's data-access controls corrects this defect.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
http://search.9cbs.net/expert/topic/323/323062.xml?temp=.1287195
newyj (Wu Gang vs Si Sid) · Reputation: 107 · 2001-10-15 08:58:54Z · Score: 10
How can database access performance be improved? Looking at a few database books, the common methods are: 1. build indexes; 2. split tables, both horizontally and vertically; 3. upgrade the hardware. Query only the data within a given range, using TADOQuery.

However, 10,000 rows of data is not a lot; 7-8 seconds really is too long. It may be related to the configuration of the machine you are using.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
http://search.9cbs.net/expert/topic/323/323062.xml?temp=.1287195
FQ_FQ (empty) · Reputation: 100 · 2001-10-17 16:38:41Z · Score: 20
The way to speed this up is very simple. First, I suggest you use a TADOQuery component; then set the component's CacheSize := 1000 (the default is 1, the worst possible choice); then set the cursor location to the client side; and finally pick your cursor type. A forward-only cursor is the fastest, but you have to match the cursor type to how you use the data.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
http://search.9cbs.net/expert/topic/596/596212.xml?temp=.491157
raptor (Raptor) · Reputation: 125 · 2002-03-24 17:08:06Z · Score: 10
In fact, performance is best for single-user access, especially when the amount of data is very large. People have tested this: with databases in the gigabyte range and tens of millions of records, the best-performing database is FoxPro 2.5 for DOS. I have tested it too; in the single-user case, Paradox/Access performance beats InterBase/SQL 2000, Oracle for Windows is the worst, and SQL 2000 is faster than SQL 7.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Reply from: R3000 () · Reputation: 100 · 2002-03-26 09:30:25Z · Score: 10
(No C/S, single user only.) That is the premise; everyone please keep it in mind. As long as your machine is good enough and the memory large enough, gigabytes of data and millions of records are still no problem. You have the following choices, in no particular order: 1. Access 2. Paradox 3. SQL Server Desktop 4. Oracle Lite 5. SQL Anywhere 6. InterBase 7. MySQL

I have used basically all of them; each has its advantages and disadvantages.

Considering speed: 1. SQL Anywhere 2. MySQL 3. Access
Considering a later upgrade to C/S: 1. Oracle Lite 2. SQL Server Desktop 3. SQL Anywhere
Considering convenience of administration: 1. Access 2. SQL Server Desktop 3. SQL Anywhere
Considering stability and safety: 1. SQL Anywhere 2. SQL Server Desktop 3. MySQL
Considering SQL support: 1. Oracle Lite 2. SQL Server Desktop 3. SQL Anywhere

Access and Paradox are pure desktop databases that support only a subset of SQL. MySQL is inconvenient to administer. InterBase's SQL support is poor; for example, it does not support UPDATEs with table joins. It is also not secure: the password is little more than decoration.

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Reply from: bruise () · Reputation: 100 · 2002-03-26 10:36:03Z · Score: 5
Nobody has test data to hand; it seems most of us are equally lazy. :-) One example: SQL 2000 running on W2K, with about 5 million rows, growing by 500,000-700,000 a month, about 1 GB in size. A billing statistics run takes 3 minutes; the performance is barely acceptable.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Reply from: juqiang (Square gun (currently grinding away)) · Reputation: 100 · 2002-03-26 11:02:16Z · Score: 0
SQL Server, Oracle, Sybase, DB2: any of these will do! I once learned from an M$ saleswoman that under 100 GB is a small application, 100-500 GB is medium, and 500 GB up to the terabyte range is a large application.

A typical domestic application grows to at most 10 GB in a year! Applications of that size hardly even need to talk about data tuning.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
hank (Star Farm) · Reputation: 115 · 2002-03-26 12:40:10Z · Score: 10
This should be decided by your database, your application's connection method, and your hardware configuration!

Raptor (Raptor) is right: "In fact, in terms of performance, single-user access is best, especially when the amount of data is very large. People have tested this: with gigabytes of data and tens of millions of records, the best-performing database is FoxPro 2.5 for DOS." You can test it with 30 million records: a FoxPro database will have no problem, but an Access database will probably die!

However, FoxPro databases are rather troublesome to maintain. In one garment-industry project, the DBF file of the stock table reached 250 MB, which took considerable effort to maintain; we still use the FoxPro/VFP series of tools to maintain it. The speed, of course, is absolutely no problem!

I ran tests on a failed project (which, owing to a mistake in the early requirements, used an Access database). After all of the customer's system data had been migrated into the Access database, it reached 1.2 GB; developed with Delphi 5.x and ADO, opening the order table felt like a crash!

Once the database capacity exceeds about 600 MB, I recommend not using Access. Plenty of project experience backs this up, and there is no need to burn yourself; also think carefully about Access's database-compaction problem. In theory, of course, the Access 2000 format can reach 4 GB and the Access 97 format 1.2 GB (I forget where I read that)!

Judging from the original poster, the database has reached 4 GB or more and currently runs in stand-alone mode. If you develop on Windows with Delphi and ADO, whether with a C/S or a MIDAS architecture, the wise choice is a mid-range database such as SQL Server, SQL Anywhere, or InterBase. You could also choose a large database such as Oracle, DB2, or Sybase, but there seems to be no need. If you deploy on Linux, the range of choices narrows! And once cost and licensing issues are involved, you can really only choose InterBase, because it is free: when you sell a program for 10,000 yuan, you cannot spend hundreds of thousands on the database alone!


━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

http://search.9cbs.net/expert/topic/7/7197.xml?temp=.6089746

Reply from: ardus () · Reputation: 100 · 2000-05-03 11:26:00Z · Score: 10
1. First rule out problems caused by the table design: run your query directly on the server side and see how fast it is. 2. If it is still slow after design factors are excluded, consider putting the application server and the database server on the same machine. 3. If it is slow even on the same machine, then do optimization specific to your queries; for example, when the user supplies no query conditions, fetch the result set segment by segment in the program and display one segment of results to the user at a time. 4. If the user insists on seeing tens of thousands of records on his screen at once, first treat him to a meal, then consider multi-threaded queries with merged display.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Reply from: LWM8246 (LWM8246) · Reputation: 100 · 2001-01-14 00:17:00Z · Score: 0
1) Do you really need all 200,000 records, and do you need every field? 2) Add the necessary selection conditions to the TQuery, above all to reduce the number of records. 3) Consider segmented (paged) access via ADO. 4) Plan the data structures so that the heavy operations run on the server side, returning only the processed results to the client.

Finally, it is strongly recommended that you generally avoid the TTable component: it returns all records!!!
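The advice above, and LWM8246's point 4, boil down to doing the work on the server and moving only results across the wire. A minimal sketch of the difference, using Python's built-in sqlite3 as a stand-in backend (the table name, columns, and data are invented for illustration):

```python
import sqlite3

# Hypothetical schema: the thread never shows one, so these names are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO orders (amount) VALUES (?)",
                 [(i % 100,) for i in range(10_000)])

# Anti-pattern: drag every row to the client just to compute a total.
total_client = sum(row[0] for row in conn.execute("SELECT amount FROM orders"))

# Better: let the server do the work and return a single row.
total_server = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]

print(total_client == total_server)  # True: same answer, one row transferred
```

The second query transfers one row instead of ten thousand, which is exactly the "only return processed results to the client" rule.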

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Reply from: TON2000 (small urchin XP) · Reputation: 100 · 2001-01-15 20:20:00Z · Score: 0
200,000 records? That is going to take a few minutes!?
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
http://search.9cbs.net/expert/topic/821/821414.xml?temp=.2205927

Reply from: Blue__star (Blue Boiling Point) · Reputation: 100 · 2002-06-21 17:30:29Z · Score: 50
Build indexes sensibly and partition your tables.
1. Use indexes sensibly. The index is an important data structure in a database, and its fundamental purpose is to improve query efficiency. Most database products today use the ISAM index structure first proposed by IBM. Indexes should be used just right, along the following principles:
● Build indexes on columns that are frequently joined but are not declared as foreign keys; for fields joined less often, the optimizer generates indexes where needed.
● Build indexes on columns that are frequently sorted or grouped (that is, used in GROUP BY or ORDER BY operations).
● Build indexes on columns that often appear in search conditions, but not on columns with few distinct values. For example, the "sex" column of an employee table has only the two values "male" and "female", so there is no point indexing it; an index there would not improve query efficiency, and it would seriously slow down updates.
● If several columns are to be sorted together, a composite index can be built on them.
● Use the system's tools. The Informix database, for example, has a tbcheck utility for examining suspicious indexes. On some database servers an index may become invalid, or frequent operation may degrade its read efficiency; if a query that should use an index is inexplicably slow, check the index's integrity with tbcheck and repair it if necessary. In addition, after a table has received a large volume of updates, dropping and rebuilding its indexes can improve query speed.
2. Avoid or simplify sorting. Repeated sorting of large tables should be simplified or avoided. When the optimizer can use an index to produce output in the required order, it skips the sorting step.
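The indexing principles in point 1 can be tried concretely. The sketch below uses Python's sqlite3 as a stand-in engine (the original advice targets Informix-era servers; all table and index names here are invented): the query planner uses the index on a selective column, while the two-valued sex column is left to a full scan, just as the advice predicts.

```python
import sqlite3

# Hypothetical employee table; names are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, sex TEXT)")
conn.executemany("INSERT INTO employee (name, sex) VALUES (?, ?)",
                 [(f"emp{i}", "male" if i % 2 else "female") for i in range(1000)])

# Index the selective column that is frequently searched and sorted.
conn.execute("CREATE INDEX idx_emp_name ON employee(name)")

plan_name = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM employee WHERE name = 'emp42'").fetchall()
plan_sex = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM employee WHERE sex = 'male'").fetchall()

# The name lookup goes through the index; the two-valued sex column is scanned.
print("USING INDEX" in plan_name[0][-1])
print("SCAN" in plan_sex[0][-1].upper())
```

EXPLAIN QUERY PLAN is SQLite-specific; other engines expose the same information through their own EXPLAIN variants.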
Some of the influencing factors: to avoid unnecessary sorting, build the right indexes and merge database tables sensibly (this may sometimes hurt the table's normalization, but it is worth it for efficiency). If sorting is unavoidable, try to simplify it, for example by narrowing the range of columns being sorted.
3. Eliminate sequential access to large-table rows. In nested queries, sequential table access can be fatal to query efficiency. For example, with a sequential-access strategy, a query nested three levels deep, each level scanning 1,000 rows, ends up scanning one billion rows. The main way to avoid this is to index the join columns. For example, with two tables, student (student_no, name, age, ...) and course_selection (student_no, course_no, grade), a join between them should have an index on the "student_no" join field. Unions can also be used to avoid sequential access: even though indexes exist on all the queried columns, some forms of WHERE clause force the optimizer into a sequential scan. The following query forces a sequential pass over the orders table:
SELECT * FROM orders WHERE (customer_num = 104 AND order_num > 1001) OR order_num = 1008
Although indexes are built on customer_num and order_num, the optimizer still scans the whole table for this statement.

Because this statement retrieves a scattered set of rows, it should be rewritten as:
SELECT * FROM orders WHERE customer_num = 104 AND order_num > 1001
UNION
SELECT * FROM orders WHERE order_num = 1008
which lets the query be processed along the index path.
4. Avoid correlated subqueries. If a column's value appears both in the outer query's select list and in a WHERE-clause subquery, then it is likely the subquery must be re-executed every time that column's value changes in the outer query. The deeper the nesting, the lower the efficiency, so subqueries should be avoided where possible; if a subquery is unavoidable, filter out as many rows as possible inside it.
5. Avoid expensive regular expressions. The MATCHES and LIKE keywords support wildcard matching, a form of regular expression, but this kind of match is particularly time-consuming. For example: SELECT * FROM customer WHERE zipcode LIKE "98___". Even with an index on zipcode, this may still be executed as a sequential scan. If the statement is changed to SELECT * FROM customer WHERE zipcode > "98000", the query uses the index, which obviously speeds it up greatly. Also avoid non-leading substrings: the statement SELECT * FROM customer WHERE zipcode[2,3] > "80" uses a non-leading substring in its WHERE clause, so it cannot use an index.
6. Use temporary tables to speed up queries. Sorting a subset of a table into a temporary table sometimes accelerates queries: it avoids repeated sort operations and in other ways simplifies the optimizer's work. For example:
SELECT cust.name, rcvbles.balance, ...other columns
FROM cust, rcvbles
WHERE cust.customer_id = rcvbles.customer_id
  AND rcvbles.balance > 0
  AND cust.postcode > "98000"
ORDER BY cust.name
If this query is to be executed more than once, all the customers with unpaid balances can instead be extracted into a temporary file, sorted by customer name:
SELECT cust.name, rcvbles.balance, ...other columns
FROM cust, rcvbles
WHERE cust.customer_id = rcvbles.customer_id
  AND rcvbles.balance > 0
ORDER BY cust.name
INTO TEMP cust_with_balance
and subsequent queries run against the temporary table:
SELECT * FROM cust_with_balance WHERE postcode > "98000"
The temporary table has fewer rows than the main table, and its physical order is the desired order, which reduces disk I/O and greatly cuts the query workload. Note that the temporary table does not reflect modifications made to the main table after it is created; when the main table's data changes frequently, be careful not to work from stale data.
7. Use sorting in place of non-sequential access. Non-sequential disk access is the slowest operation of all, showing up as back-and-forth movement of the disk arm. SQL hides this, which makes it easy to write queries that access a large number of non-sequential pages.
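The OR-to-UNION rewrite in point 3 is easy to sanity-check. This sketch, in Python's sqlite3 with invented data, verifies only that the two forms return the same rows; whether the UNION form actually runs faster depends on the engine and its optimizer:

```python
import sqlite3

# Minimal orders table matching the example's column names; data is invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_num INTEGER PRIMARY KEY, customer_num INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(n, 100 + n % 10) for n in range(1000, 1011)])
conn.execute("CREATE INDEX idx_cust ON orders(customer_num)")

# The OR form that forces a sequential scan on engines like the one described.
or_form = conn.execute(
    "SELECT * FROM orders "
    "WHERE (customer_num = 104 AND order_num > 1001) OR order_num = 1008").fetchall()

# The index-friendly UNION rewrite.
union_form = conn.execute(
    "SELECT * FROM orders WHERE customer_num = 104 AND order_num > 1001 "
    "UNION "
    "SELECT * FROM orders WHERE order_num = 1008").fetchall()

print(sorted(or_form) == sorted(union_form))  # True: same rows either way
```

UNION also removes duplicate rows, which is what makes the two forms equivalent here; with UNION ALL a row matching both branches would appear twice.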

Sometimes the database's sorting capability can be used in place of non-sequential access to improve a query. Optimization means selecting the most efficient way to execute a SQL statement; the Oracle optimizer chooses what it judges to be the most efficient execution path.

1. IS NULL and IS NOT NULL. If a column can contain NULL values, even an index on that column will not improve performance for these predicates.
2. Write different SQL statements for different tasks. Writing one large SQL program to complete several different jobs is not a good approach; it tends to leave every one of those jobs unoptimized. To do different work, it is generally better to write different statement blocks than a single catch-all statement.
3. IN versus EXISTS.
SELECT name FROM employee WHERE name NOT IN (SELECT name FROM student);
SELECT name FROM employee e WHERE NOT EXISTS (SELECT 1 FROM student s WHERE s.name = e.name);
The first statement usually performs worse than the second. With EXISTS, Oracle checks the main query first, then runs the subquery only until it finds the first match, which saves time. With IN, Oracle executes the subquery first and stores its result list in an indexed temporary table, suspending the main query in the meantime; only then does the main query run against that temporary table. This is why EXISTS is usually much faster than IN.
4. The NOT operator.
SELECT * FROM employee WHERE salary <> 1000;
SELECT * FROM employee WHERE salary < 1000 OR salary > 1000;
The first statement performs worse than the second, because the second can use an index.
5. ORDER BY. An ORDER BY is slow because it must sort; avoid using expressions in the ORDER BY clause.
6. Column concatenation.
SELECT * FROM employee WHERE name || department = 'zyzBioInfo';
SELECT * FROM employee WHERE name = 'zyz' AND department = 'BioInfo';
Of these two queries, the second is faster, because the Oracle optimizer does not use indexes for queries involving the concatenation operator '||'.
7. The wildcard '%'. When a wildcard appears at the start of the search term, the Oracle optimizer does not use the index.
SELECT * FROM employee WHERE name LIKE '%z%';
SELECT * FROM employee WHERE name LIKE 'z%';
The second statement executes faster than the first, but note that the two queries may return different result sets.
8. Avoid mixed-type expressions. Suppose the field studentno is of type VARCHAR2; in the statement SELECT * FROM student WHERE studentno > 123, Oracle performs an implicit type conversion, and that implicit conversion may make the optimizer ignore the index. Use an explicit conversion instead: SELECT * FROM student WHERE studentno = TO_CHAR(123).
9. DISTINCT. DISTINCT always creates a sort, so such queries are also slow.
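Point 3 (IN versus EXISTS) can be illustrated with a toy pair of tables. This sketch uses Python's sqlite3 and invented rows, and checks only that the two statements agree, not their relative speed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employee (name TEXT);
    CREATE TABLE student  (name TEXT);
    INSERT INTO employee VALUES ('ann'), ('bob'), ('carol');
    INSERT INTO student  VALUES ('bob');
""")

not_in = conn.execute(
    "SELECT name FROM employee WHERE name NOT IN (SELECT name FROM student)").fetchall()

not_exists = conn.execute(
    "SELECT name FROM employee e WHERE NOT EXISTS "
    "(SELECT 1 FROM student s WHERE s.name = e.name)").fetchall()

# Same answer here; note, though, that NOT IN returns nothing at all if the
# subquery column contains a NULL, while NOT EXISTS behaves as expected.
print(not_in == not_exists == [('ann',), ('carol',)])  # True
```

The NULL caveat in the comment is a classic reason to prefer NOT EXISTS even apart from the performance argument made above.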

The optimizations discussed so far are at the application level; admittedly, that is also the most important level. At the system level you can optimize the disk layout, placing tablespaces with different purposes on different physical disks; optimize the rollback segments; optimize the log files, sizing them to match the transaction volume; and focus on improving the system's I/O performance. Also at the system level: allocate tablespaces sensibly, and try not to keep unnecessary objects of your own in the SYSTEM tablespace. I think that is sound advice too. Even more questions should be settled at design time, for example: use fewer cursors, less dynamic SQL, and fewer fuzzy queries such as LIKE. Query optimization is above all about using indexes; it is recommended to ANALYZE your tables so that cost-based query optimization can work. A WHERE clause is generally evaluated back to front, the last condition first, so put the condition that filters out the most rows last; and in a multi-column index, the more leading columns a query pins down, the better the search.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
http://search.9cbs.net/expert/topic/908/908740.xml?temp=.1650507
Reply from: weekendw (old bird) · Reputation: 97 · 2002-07-31 10:06:00Z · Score: 35
I don't know whether you understand how TADOQuery works; if you already do, please don't blame me for explaining. TADOQuery has several important properties that determine its working mode and its efficiency: CursorLocation, CacheSize, and CursorType.

CursorLocation determines TADOQuery's data-access mode, which comes in two flavors: server-side and client-side. If you choose server-side (set CursorLocation to clUseServer), then when you query data with TADOQuery the result set is kept in the data source's ADO cache, or in the data source's own cache (if your data source is SQL Server, the results stay in SQL Server's cache). The data source hands the client's ADO engine only CacheSize records at a time for the application to process: if you set CacheSize to 10, the server returns just 10 rows to the client, and when the client wants to touch the eleventh row, it must ask the server-side ADO engine for the next 10. Client-side mode (clUseClient) instead transfers the entire result set into the client's ADO cache, after which the client-side ADO cursor hands it to the application. Server-side mode ties up more server resources, and the client must make a fresh request for each batch it processes, so it is slower overall; in client-side mode all the data has already been moved locally, so only one transfer is needed and no further requests are made, but that first request carries the full data volume, so the initial wait is longer. In general, client-side mode with a static cursor is recommended, with CacheSize tuned appropriately (around 500-1000).
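The CacheSize mechanism described above, where the client pulls one batch of rows per round trip, is roughly what fetchmany does in Python's DB-API. A hypothetical sketch with sqlite3 (table and sizes invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO t (id) VALUES (?)", [(i,) for i in range(2500)])

CACHE_SIZE = 1000  # analogous to TADOQuery.CacheSize
cur = conn.execute("SELECT id FROM t ORDER BY id")

batches = 0
rows_seen = 0
while True:
    batch = cur.fetchmany(CACHE_SIZE)  # one round trip's worth of rows
    if not batch:
        break
    batches += 1
    rows_seen += len(batch)

print(batches, rows_seen)  # 3 2500, i.e. batches of 1000 + 1000 + 500
```

The trade-off is the same one the post describes: larger batches mean fewer round trips but a longer wait per batch.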

Now to your problem: a single query that returns more than 900,000 records is itself unreasonable. Let's do the arithmetic: if one record is 100 bytes long, then 900 × 1024 × 100 bytes ≈ 90 MB! If your application is not on the same machine as the server, the network transfer time for 90 MB alone is very considerable. So replace the one query that fetches every record with several smaller batched queries: query N times, fetching a subset of the records each time. Practice shows that this query model is faster than fetching every record at once (my own experience).
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
http://search.9cbs.net/expert/topic/1458/1458502.xml?temp=.6585199

Reply from: atang (A Tang) · Reputation: 91 · 2003-02-24 10:53:25Z · Score: 0
My master table has more than 100,000 records, and my two detail tables have tens of thousands each, opened with Close and then Open. Let me ask the question again: the program takes a long time to start. I set the cursor on the server side, and when the detail tables open I get the error: "The current provider does not support the necessary interfaces for sorting or filtering." I tried turning on asynchronous mode, but the speed did not change at all.
TOP · Reply from: chenxiyu21th (希瑜) · Reputation: 98 · 2003-02-24 12:39:57Z · Score: 0
Use a third-party control to operate the database, and pull into client memory only the data you actually use; startup will then be fast.
TOP · Reply from: jade007 (inquiry notice) · Reputation: 99 · 2003-02-25 00:00:15Z · Score: 0
I ran into the same problem. The solution is very simple: just define a primary key in the database. If that doesn't do it, ask me again.
TOP · Reply from: jade007 (inquiry notice) · Reputation: 99 · 2003-02-25 00:03:33Z · Score: 0
With that many records I suggest a large database such as Oracle; a table of 120,000 records opened for me in only 3 seconds.
TOP · Reply from: king_0119 (Wisdom) · Reputation: 99 · 2003-02-25 07:59:54Z · Score: 0
May I ask what database you are using with ADO? With MS products ADO gives me no trouble, but ADO with Oracle has problems I have not solved to this day, so it is best to choose Borland's dbExpress, though it does not yet support MS Access.

TOP · Reply from: lijx18 (lijx) · Reputation: 100 · 2003-02-25 16:51:46Z · Score: 0
Does the table have a primary key?
TOP · Reply from: atang (A Tang) · Reputation: 91 · 2003-02-27 13:17:25Z · Score: 0
It has a primary key! But I have solved it: give each connection its own session. With multiple connections the problem disappears, though the speed is still very slow. I use SQL Server over 10M bandwidth.
TOP · Reply from: nnwq (仔) · Reputation: 97 · 2003-03-17 23:11:39Z · Score: 0
Without a primary key, after appending a record and saving it, modifying it again immediately raises an error!!! You have to reopen the data table. Is this a big bug in ADO???!!! (I use ADO with MDB.) This problem gives me a headache, especially when using cached updates, where reopening the table is out of the question.
TOP · Reply from: atang (A Tang) · Reputation: 91 · 2003-03-25 10:35:58Z · Score: 0
Now it is happening to me! Oh, somebody help me!
TOP · This question has been open for some time.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
http://search.9cbs.net/expert/topic/2613/2613279.xml?temp=.4357111
Topic: A performance comparison of TQuery and TTable
Author: DelphiToby (Mr. Devil) · Credit: 95 · Forum: Delphi DataBase · Points: 50 · Replies: 16 · Posted: 2003-12-29 11:33:20Z

I have 100,000 records in my Paradox database.

Opening all the records with TTable takes only 3 seconds; opening them all with TQuery takes 2 minutes.

Is there any way to improve TQuery's performance, so that opening all 100,000 records takes no more than 15 seconds? It absolutely must open every record in the database!

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
http://search.9cbs.net/expert/topic/425/425497.xml?temp=.9340326
newmanjb (cloth knight) · Reputation: 98 · 2001-12-17 13:53:43Z · Score: 50
I tried working with 240,000 records. ADOQuery.Requery refreshed very slowly, and ADOQuery.Refresh would not refresh at all. So I simply set Active to False and then back to True, which is much faster (6-8 seconds refreshing on the server, 6-20 seconds from a workstation; I use MS SQL 2000 on a 600 MHz CPU).

Another approach is to set the ADOQuery to transfer data asynchronously, so that the program still responds to the user's actions, with a progress bar showing the progress of the refresh. I know this is not a good method, which is why I am following this thread; I hope some master will enlighten me.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
http://search.9cbs.net/expert/topic/483/483353.xml?temp=5.439395e-02
Topic: I am pleasantly surprised that everyone is so interested in Access!!! Let's continue with a big discussion!!!
Author: biggo (biggo) · Credit: 98 · Forum: VC/MFC non-technical · Points: 0 · Replies: 28 · Posted: 2002-01-17 12:39:16Z
I did not expect everyone to be so interested in Access: nearly 8,000 visitors and 288 replies in just a few days! As the original poster, let me combine your practical experience and sum Access up. First of all, I personally may not know Access all that thoroughly, and I hope everyone will forgive me!!!

The simple belief that Access cannot manage large volumes of data, that beyond a certain number of megabytes performance drops and inexplicable errors appear, is in my view one-sided. My practical experience is as follows.

My customer's actual usage is as follows: the factory has 600 staff, each generating 1 statistics record per day and at least 4 detail records per day, and quite a few people generate 6. Detail records have 5 fields; statistics records have 67 fields. There are of course a few more tables, but those hold little data: apart from the 600 rows in the employee table, generally a dozen to a few dozen rows at most. So one day produces 600 statistics records and some 2,800 detail records; after a year of 365 working days, that is about 200,000 statistics records and 900,000 detail records, already roughly a gigabyte of data if none of it is compressed. Yet the customer's software still works normally, on a P3-800 machine. Over all that time, with that volume of data, the customer has never reported an error or a data crash while using the software!

So the one-sided view that Access cannot manage large volumes of data perhaps reflects an understanding of desktop databases that stopped at the classroom and at FoxBASE. Access can fairly be called an excellent desktop database.

I wrote this system in VC, operating the database directly through Jet 4.0's OLE DB, not through ADO. Many people complain that once the Access data grows, the program develops problems. I think most of those problems lie in the program's structure, especially with Delphi's highly automated database programming: the automation lets programmers neglect discipline in data access, so a task that needs only a few records ends up traversing every record on the client while the software runs, and the more data there is, the lower the program's speed and efficiency. In my programs I enforce a strict rule: take from the database exactly as much data as the task needs, and never one row more! So apart from printing detailed reports, which can touch thousands or tens of thousands of records, the data resident on each client is usually only a few hundred rows, and the client never traverses the statistics or detail tables; otherwise my system would have collapsed long ago. When you write database programs, work hard on that discipline; don't blame Access when the program misbehaves.

Since my program handles the large data volume without problems, do I ever think about SQL Server, Sybase, InterBase, or DB2? My system is really a small system with a large amount of data; the relationships inside the database are not complex, and there is no need for those enterprise databases. If the need arises, I can upgrade to SQL Server by changing nothing but the database connection parameters.

Of course, those large databases provide many features that Access lacks, and very attractive ones. But my suggestion is: if your system is not a true enterprise application, just a small piece of functional software, use Access directly; the amount of data is not the issue. If, on the other hand, you place great weight on data security and are not afraid of the software cost, then by all means use SQL Server; that is another matter entirely.

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

http://search.9cbs.net/expert/topic/597/597106.xml?temp=.5788538
Topic: An Access database of 600,000-plus records opens instantly in Access, but takes several minutes through ADO. Why?
Author: csdnkey (function) · Credit: 99 · Forum: Delphi DataBase · Points: 100 · Replies: 9 · Posted: 2002-03-24 22:22:40Z

An Access database with more than 600,000 records opens instantly in Access itself, yet through ADO it takes me several minutes. Why? It may be a matter of controlling the query volume.

I used SELECT TOP 100 ... to limit the number of records, and the speed improved, but now I don't know how to read the next 100!

How can I solve this? I hope everyone can help. Thank you!!
Reply from: my_first (小@_@小) · Reputation: 99 · 2002-03-24 22:25:06Z · Score: 50
Look, this is paging downward through the data. Last night I also worked out paging upward; once you have that, the next page downward is easy to get.

Down:

SELECT TOP 100 …
FROM table1
WHERE … > …
ORDER BY …

Up: I'll give you the line of thinking. Set up a paging marker; first sort descending and take out the number of records you want, then sort ascending again, and you have it.

Look below: I paid someone 300 points for this idea.

SELECT TOP …
FROM table1
WHERE … IN (
    SELECT TOP …
    FROM table1
    WHERE … <= …
    ORDER BY … DESC
)
ORDER BY …
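The skeleton above lost its field names in transit; here is one concrete reading of the nested-TOP idea, translated to Python's sqlite3 (SQLite spells TOP n as LIMIT n, and the table, the key column BH, and the page boundary are assumptions of this sketch): fetch the previous 100-row page, returned ascending.

```python
import sqlite3

# Invented table with key column BH, as in the thread's later example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (bh INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO table1 VALUES (?, ?)",
                 [(i, f"row{i}") for i in range(300)])

page_size = 100
first_on_screen = 200  # assume the current page starts at BH = 200

# Inner query: the 100 keys just below the current top, taken descending;
# outer query: the same rows, re-sorted ascending for display.
# (The skeleton's <= becomes < here because the marker is the first
# row currently shown, which should not reappear on the previous page.)
prev_page = conn.execute(
    "SELECT bh FROM table1 WHERE bh IN ("
    "    SELECT bh FROM table1 WHERE bh < ? ORDER BY bh DESC LIMIT ?"
    ") ORDER BY bh",
    (first_on_screen, page_size)).fetchall()

print(prev_page[0][0], prev_page[-1][0], len(prev_page))  # 100 199 100
```

This is the "sort descending, take what you need, sort ascending again" trick from the post, expressed as a single nested query.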

I spent a whole evening on it and only just got it working. I was so excited.

I used to display all the data, which made the software slow. It is much better now.

TOP · Reply from: my_first (小@_@小) · Reputation: 99 · 2002-03-24 22:25:28Z · Score: 0
Down:

SELECT TOP 100 …
FROM table1
WHERE … > …
ORDER BY …

Now let me explain it.

You need a key, for example a number field BH, and the numbers must sort, say 0-3000.

First take out the first record of the table1 table and store it in a variable:

var
  TmpLB: string;
begin
  ADOTable1.First;
  TmpLB := ADOTable1.Fields.FieldByName('BH').AsString;
  // stores the first record's key (here the record with BH = 0) in TmpLB

  // SELECT TOP 100 * FROM table1 WHERE BH > :TmpLB ORDER BY BH
  // TmpLB is the paging marker you defined; here it amounts to WHERE BH > 0
end;

When the user clicks "next page", store the BH value of the 100th record into TmpLB and run the query again:

  // SELECT TOP 100 * FROM table1 WHERE BH > :TmpLB ORDER BY BH
  // now TmpLB = 100, so this fetches records 100-200

No loop is needed.
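my_first's "down" paging, what would now be called keyset pagination, translates directly to other engines. A sketch in Python's sqlite3 (LIMIT in place of TOP; the table and key names follow the thread's example, the data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (bh INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO table1 VALUES (?, ?)",
                 [(i, f"row{i}") for i in range(350)])

def next_page(conn, last_bh, page_size=100):
    # WHERE BH > :marker ORDER BY BH LIMIT :page_size — the paging marker
    # is simply the key of the last row already shown.
    return conn.execute(
        "SELECT bh FROM table1 WHERE bh > ? ORDER BY bh LIMIT ?",
        (last_bh, page_size)).fetchall()

pages = []
marker = -1  # nothing shown yet
while True:
    rows = next_page(conn, marker)
    if not rows:
        break
    pages.append(len(rows))
    marker = rows[-1][0]  # remember the last key, like TmpLB above

print(pages)  # [100, 100, 100, 50]
```

Each page is one indexed range scan, so the cost stays constant no matter how deep into the table the user pages; that is exactly why this beats fetching everything up front.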

TOP · Reply from: zbsfg () · Reputation: 98 · 2002-03-24 22:32:01Z · Score: 50
Your engine setting is wrong. Delphi defaults to a client-side cursor, but for Access, server mode is much faster: 600,000 records open in about a second. Just set TADOConnection.CursorLocation accordingly. For MS SQL, however, clUseClient is very fast.
TOP · Reply from: windindance (Wind Dance Early) · Reputation: 90 · 2002-03-24 22:35:14Z · Score: 0
Learning.
TOP · Reply from: csdnkey (function) · Reputation: 99 · 2002-03-24 22:44:13Z · Score: 0
To my_first: first of all, thank you! But my data is not necessarily sorted by BH. What then?
TOP · Reply from: my_first (小@_@小) · Reputation: 99 · 2002-03-24 22:57:11Z · Score: 0
Any sorted field will do; it need not be a number, another field works too.
TOP · Reply from: my_first (小@_@小) · Reputation: 99 · 2002-03-24 22:58:16Z · Score: 0

The key just has to be a field that sorts.

ADOTable1.First; TmpLB := ADOTable1.Fields.FieldByName('BH').AsString; // change 'BH' here to your own sort key

TOP · Reply from: csdnkey (function) · Reputation: 99 · 2002-03-24 23:06:33Z · Score: 0

To zbsfg ():

Your method works! Thank you!

To my_first (surf): My database really has no fixed sort field, so I can't use your method, but thank you very much all the same!

TOP · Reply from: clark_x (feng) · Reputation: 89 · 2002-08-30 15:52:50Z · Score: 0
So what should I do???

Please credit the original source when reposting: https://www.9cbs.com/read-13929.html
