Delphi is a rapid application development tool that is deeply loved by programmers. Its powerful component library lets developers complete common tasks, such as user-interface work and database applications, easily and efficiently. However, the relative scarcity of documentation means that many components are not used correctly, and the problems only surface later. This is especially true of TClientDataSet and TDataSetProvider, the core components of MIDAS development: because information about them is so scarce, most books barely touch on them. I was fortunate to come across Cary Jensen's Professional Developer Series articles on BDN, which explain Delphi's database development technology in depth. Here I have selected the ClientDataSet material to share with you.
ClientDataSet is a powerful class that delivers the functionality of other dataset components by simulating a table in memory. The component was originally available only in the Enterprise editions of Delphi and C++Builder, but today all of Borland's products (including the latest Kylix) ship with the TClientDataSet component.
Looking at its class hierarchy, TClientDataSet is a descendant of the abstract class TDataSet, so all the familiar TDataSet-level operations are available: navigation, sorting, filtering, editing, and so on. Note, however, that TClientDataSet uses a newer technique of keeping all of its data in memory. It is, in effect, a "virtual table" that lives entirely in RAM, which makes operations on it very fast: building an index over 100,000 records takes less than half a minute on a PIII 850 machine with 512 MB of RAM.
Unlike ordinary dataset components, TClientDataSet uses a storage technique designed for high speed and low storage overhead: it maintains two data stores internally. The first is its Data property, a view of the current in-memory data that reflects every change. If the user deletes a record, that record disappears from Data; when a new record is added, it appears in Data.
The other store is the Delta property. As its name suggests, Delta holds the increments, that is, the change log. Whether you add a record to Data or delete one from it, the change is recorded in Delta. If you modify a record in Data, two corresponding entries are kept in Delta: one is the original record, and the other contains only the modified field values. Because of Delta, and because TClientDataSet keeps everything in memory, changes are not written immediately to the corresponding physical store; they can be applied, or rolled back, at a time of your choosing. This is TClientDataSet's buffered-update feature.
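The change log can even be inspected at run time, because Delta is itself a valid data packet that a second ClientDataSet can display. A minimal sketch, assuming two TClientDataSet instances named cdsMain and cdsDelta (the names are hypothetical):

```delphi
// Show pending changes by viewing Delta through a second ClientDataSet.
procedure TForm1.btnShowDeltaClick(Sender: TObject);
begin
  if cdsMain.ChangeCount > 0 then    // ChangeCount = number of pending changes
  begin
    cdsDelta.Data := cdsMain.Delta;  // Delta is a data packet in its own right
    cdsDelta.Open;                   // browse the change log in a grid, etc.
  end
  else
    ShowMessage('No pending changes');
end;
```

Hooking cdsDelta up to a DBGrid makes the "original record plus modified fields" pairing described above directly visible.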
To persist changes to the underlying data store, we call the appropriate TClientDataSet method. If the ClientDataSet is associated with a DataSetProvider, calling TClientDataSet.ApplyUpdates is all that is needed to save the updates. If the TClientDataSet has no corresponding TDataSetProvider and instead works directly against a file, things become more interesting; we will return to this case when we discuss the briefcase model. In that scenario, Delta is preserved across TClientDataSet.SaveToFile and LoadFromFile. After calling MergeChangeLog or CancelUpdates, the contents of Delta are emptied; the difference is that MergeChangeLog merges the Delta entries into Data, so the changes become part of the local data and can then be saved to physical media, while CancelUpdates discards them all, returning the data to its original state. Most applications use TClientDataSet together with TDataSetProvider. This pairing reflects Borland's design intent: to provide a path toward distributed environments. Let us work through it step by step.
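The two persistence routes just described can be sketched side by side. This is an illustrative fragment, not code from the demo program; cds is an assumed TClientDataSet, and the file path is only an example:

```delphi
// 1. Connected to a DataSetProvider: push the change log to the database.
//    The argument is the error-tolerance count discussed later; 0 = strict.
cds.ApplyUpdates(0);

// 2. Briefcase model: no provider, work against a local file instead.
cds.SaveToFile('orders.cds');   // both Data and Delta are written out
// ... later, possibly offline or on another machine ...
cds.LoadFromFile('orders.cds'); // pending changes are still in Delta

// Managing the change log locally:
cds.MergeChangeLog;  // fold Delta into Data; the log becomes empty
// or
cds.CancelUpdates;   // discard Delta; Data reverts to its original state
```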
When we set the Active property of a TClientDataSet object to True, or call its Open method, the ClientDataSet sends a request for data packets to the DataSetProvider. The DataSetProvider opens the dataset it points to, positions the record pointer on the first record, and scans from first to last. Each record scanned is encoded into a Variant array, which we usually call a data packet. When the scan is complete, the DataSetProvider closes the underlying dataset and passes all of these packets to the ClientDataSet. In the demo program I provide you can watch this happen (seeing is believing, after all!). The DBGrid on the right of the program's main form is connected to a data source pointing at the database table, and the DataSetProvider points to the same table. When you choose the ClientDataSet | Load menu item, you can see the table's records being scanned in order. Once the last record is reached, the table is closed, the DBGrid on the right is cleared, and the DBGrid bound to the ClientDataSet displays the Data now held in memory. Because the whole process is reflected in the DBGrid, loading even fewer than 1000 records takes noticeable time, most of it wasted on screen repainting; you can choose ClientDataSet | View Table Loading to disable the display and speed things up.
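By default the provider delivers every record in one request, but the fetch can also be made incremental via the PacketRecords property. A hedged sketch, with cds again standing in for a TClientDataSet linked to a provider through its ProviderName property:

```delphi
// Incremental fetching instead of one full head-to-tail scan.
cds.PacketRecords := 50;   // 50 records per packet (-1 = everything at once)
cds.Open;                  // only the first packet arrives

// With FetchOnDemand left at its default of True, scrolling past the last
// fetched record automatically requests the next packet:
while not cds.Eof do
  cds.Next;

// Packets can also be requested explicitly:
// cds.GetNextPacket;      // returns the number of records fetched
```

This trades a longer initial wait for a quick first display, which matters once tables grow beyond the small demo sizes.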
One important link was left out of the description above: how a packet is turned back into a table. The answer is that the DataSetProvider encodes metadata into the packet, and from that metadata (which we can understand as the structure of the data table) the in-memory virtual table is constructed to match the physical one. Note, however, that although the dataset the DataSetProvider points to may have several indexes, that index information is not placed in the packets; in other words, the data inside the ClientDataSet carries no indexes by default. But because ClientDataSet behaves like any other TDataSet, we can rebuild indexes locally as needed.
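Rebuilding an index locally is a one-liner with TClientDataSet.AddIndex. The field names below are hypothetical, chosen only to illustrate the call:

```delphi
// Index definitions do not travel in the data packet, so create one
// on the client side after the data arrives.
cds.AddIndex('ByName', 'LastName;FirstName', [ixCaseInsensitive]);
cds.IndexName := 'ByName';        // switch the view to the new index order

// A temporary sort can also be had without a named index:
cds.IndexFieldNames := 'Amount';  // sorts ascending by Amount
```

Because the index lives entirely in memory, switching between index orders is nearly instantaneous, which is exactly the 100,000-record behaviour mentioned earlier.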
After the data in the ClientDataSet has been modified, the changes can be submitted for persistence to the physical data table. This work is done by the DataSetProvider. Internally, the DataSetProvider creates an instance of TSQLResolver, which generates the SQL statements that apply the changes to the underlying data; in detail, a corresponding SQL statement is generated for each entry in the change log. Statement generation can also be controlled by the user: the DataSetProvider's UpdateMode property and the ProviderFlags property of the dataset's fields both affect the SQL that is produced. Alternatively, you can switch to the direct data-manipulation mechanism used in single-tier or client/server applications, bypassing generated SQL while keeping the buffered-update mechanism. Simply set the ResolveToDataSet property to True, and the DataSetProvider will not use TSQLResolver when persisting updates; instead it modifies the physical data source directly: it locates each record to be deleted and deletes it, and locates each record to be modified and posts the edit. We can alter the demo program to observe this behaviour. Change the DataSetProvider's ResolveToDataSet property to True, run the program, modify some data in the form, and save: you will see the navigation buttons on the right flash into the enabled state for an instant.
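The knobs mentioned above look like this in code. The component names dspOrders and tblOrders are illustrative stand-ins for a TDataSetProvider and the dataset it points to:

```delphi
// Generate UPDATE statements whose WHERE clause uses only the key fields
// (the alternatives are upWhereAll and upWhereChanged):
dspOrders.UpdateMode := upWhereKeyOnly;

// Mark which field identifies a row, on the provider-side dataset's fields:
tblOrders.FieldByName('OrderID').ProviderFlags :=
  [pfInKey, pfInWhere, pfInUpdate];

// Or bypass generated SQL entirely and edit the dataset directly:
dspOrders.ResolveToDataSet := True;
```

upWhereKeyOnly is the fastest but assumes no one else has touched the row; upWhereAll gives the safest optimistic-locking check at the cost of a longer WHERE clause.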
Even better, Borland took the diversity of applications into account and provides the BeforeUpdateRecord event, so that while the DataSetProvider processes each entry in the change log, we can add our own handling. For needs such as encrypting data or processing commercially sensitive values, this is a great convenience for programmers and gives them complete control over the data. The complexity of distributed environments places higher demands on data access, so using transactions to guarantee data integrity and consistency is essential. Borland accounts for this as well: when calling ClientDataSet.ApplyUpdates, you pass an integer indicating how many errors can be tolerated. If your data requirements are strict, pass 0; the DataSetProvider then opens a transaction when applying the changes, and on the first error it rolls the transaction back, leaves the change log intact, marks the offending record, and finally triggers the OnReconcileError event. If you pass a number greater than 0, the transaction is committed as long as the error count stays below that value; records whose updates failed remain in Delta, while successfully applied records are deleted from the change log. If the error count reaches the specified value, the transaction is rolled back, with the same result as passing 0. If the value is negative, everything that can be applied is committed, the unapplied changes remain in the change log, and the failing records are marked.
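Both mechanisms can be sketched briefly. These fragments are illustrative, not taken from the demo program; the field name CardNo and the Encrypt routine are hypothetical:

```delphi
// Tolerate no errors: any failure rolls back the whole transaction.
// ApplyUpdates returns the number of records that could not be applied.
if cds.ApplyUpdates(0) > 0 then
  ShowMessage('Some updates failed; see OnReconcileError');

// Intercept each change-log entry before the provider resolves it:
procedure TDataModule1.dspOrdersBeforeUpdateRecord(Sender: TObject;
  SourceDS: TDataSet; DeltaDS: TCustomClientDataSet;
  UpdateKind: TUpdateKind; var Applied: Boolean);
begin
  if UpdateKind = ukModify then
    // e.g. encrypt a sensitive field before it reaches the database;
    // Encrypt is a hypothetical helper, not a library routine.
    DeltaDS.FieldByName('CardNo').NewValue :=
      Encrypt(DeltaDS.FieldByName('CardNo').NewValue);
end;
```

Setting Applied to True inside the handler tells the provider the record has already been dealt with, so no SQL is generated for it at all.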