Thursday, April 30, 2009

Using local transactions

The BDE supports local transactions against local Paradox, dBASE, Access, and FoxPro tables. From a coding perspective, there is no difference to you between a local transaction and a transaction against a remote database server.
When a transaction is started against a local table, updates performed against the table are logged. Each log record contains the old record buffer for a record. When a transaction is active, records that are updated are locked until the transaction is committed or rolled back. On rollback, old record buffers are applied against updated records to restore them to their pre-update states.

Local transactions are more limited than transactions against SQL servers or ODBC drivers. In particular, the following limitations apply to local transactions:

Automatic crash recovery is not provided.

Data definition statements are not supported.

Transactions cannot be run against temporary tables.

For Paradox, local transactions can only be performed on tables with valid indexes. Data cannot be rolled back on Paradox tables that do not have indexes.

Only a limited number of records can be locked and modified. With Paradox tables, you are limited to 255 records. With dBASE the limit is 100.

Transactions cannot be run against the BDE ASCII driver.

TransIsolation level must only be set to tiDirtyRead.

Closing a cursor on a table during a transaction rolls back the transaction unless:

Several tables are open.

The cursor is closed on a table to which no changes were made.

Using passthrough SQL

With passthrough SQL, you use a TQuery, TStoredProc, or TUpdateSQL component to send an SQL transaction control statement directly to a remote database server. The BDE does not process the SQL statement. Using passthrough SQL enables you to take direct advantage of the transaction controls offered by your server, especially when those controls are non-standard.
To use passthrough SQL to control a transaction, you must

Install the proper SQL Links drivers. If you chose the “Typical” installation when installing C++Builder, all SQL Links drivers are already properly installed.
Configure your network protocol correctly. See your network administrator for more information.
Have access to a database on a remote server.
Set SQLPASSTHRU MODE to NOT SHARED using the SQL Explorer. SQLPASSTHRU MODE specifies whether the BDE and passthrough SQL statements can share the same database connections. In most cases, SQLPASSTHRU MODE is set to SHARED AUTOCOMMIT. However, you can’t share database connections when using transaction control statements. For more information about SQLPASSTHRU modes, see the help file for the BDE Administration utility.

Note: When SQLPASSTHRU MODE is NOT SHARED, you must use separate database components for datasets that pass SQL transaction statements to the server and datasets that do not.

Using a database component for transactions

When you start a transaction, all subsequent statements that read from and write to the database occur in the context of that transaction. Each statement is considered part of a group. Changes must be successfully committed to the database, or every change made in the group must be undone.
Ideally, a transaction should last only as long as necessary. The longer a transaction is active, and the more simultaneous users and concurrent transactions the database sees during its lifetime, the greater the likelihood that your transaction will conflict with another when you attempt to commit your changes.
When using a database component, you code a single transaction as follows:

1.Start the transaction by calling the database’s StartTransaction method.

2.Once the transaction is started, all subsequent database actions are considered part of the transaction until it is explicitly terminated. You can determine whether a transaction is in process by checking the database component’s InTransaction property. While the transaction is in process, your view of the data in database tables is determined by your transaction isolation level.
3.When the actions that make up the transaction have all succeeded, you can make the database changes permanent by using the database component’s Commit method.

Commit is usually attempted in a try...catch statement. That way, if a transaction cannot commit successfully, you can use the catch block to handle the error and retry the operation or to roll back the transaction.
4.If an error occurs when making the changes that are part of the transaction, or when trying to commit the transaction, you will want to discard all changes that make up the transaction. To discard these changes, use the database component’s Rollback method.

Rollback usually occurs in

Exception handling code when you cannot recover from a database error.
Button or menu event code, such as when a user clicks a Cancel button.

Explicitly controlling transactions

There are two mutually exclusive ways to control transactions explicitly in a BDE-based database application:
Use the methods and properties of the database component,
such as StartTransaction, Commit, Rollback, InTransaction, and TransIsolation. The main advantage to using the methods and properties of a database component to control transactions is that it provides a clean, portable application that is not dependent on a particular database or server.

Use passthrough SQL in a query component to pass SQL statements directly to remote SQL or ODBC servers. For more information about query components, see “Working with queries.” The main advantage of passthrough SQL is that you can use the advanced transaction management capabilities of a particular database server, such as schema caching. To understand the advantages of your server’s transaction management model, see your database server documentation.
One-tiered applications can’t use passthrough SQL. You can use the database component to create explicit transactions for local databases, subject to the limitations of local transactions; for more information, see “Using local transactions.”
When writing two-tiered applications (which require SQL Links), you can use either a database component or passthrough SQL to manage transactions.

Tuesday, April 28, 2009

Initializing the thread

Use the constructor to initialize your new thread class. This is where you can assign a default priority for your thread and indicate whether it should be freed automatically when it finishes executing.

Assigning a default priority

Priority indicates how much preference the thread gets when the operating system schedules CPU time among all the threads in your application. Use a high-priority thread to handle time-critical tasks, and a low-priority thread to perform other tasks. To indicate the priority of your thread object, set the Priority property. Priority values fall along a seven-point scale, as described in the following table:

Value Priority
tpIdle The thread executes only when the system is idle. Windows won't interrupt other threads to execute a thread with tpIdle priority.
tpLowest The thread's priority is two points below normal.
tpLower The thread's priority is one point below normal.
tpNormal The thread has normal priority.
tpHigher The thread's priority is one point above normal.
tpHighest The thread's priority is two points above normal.
tpTimeCritical The thread gets highest priority.
Warning: Boosting the thread priority of a CPU intensive operation may “starve” other threads in the application. Only apply priority boosts to threads that spend most of their time waiting for external events.

The following code shows the constructor of a low-priority thread that performs background tasks which should not interfere with the rest of the application’s performance:

//---------------------------------------------------------------------------

__fastcall TMyThread::TMyThread(bool CreateSuspended)
    : TThread(CreateSuspended)
{
    Priority = tpIdle;
}

//---------------------------------------------------------------------------

Indicating when threads are freed

Usually, when threads finish their operation, they can simply be freed. In this case, it is easiest to let the thread object free itself. To do this, set the FreeOnTerminate property to true.
There are times, however, when the termination of a thread must be coordinated with other threads. For example, you may be waiting for one thread to return a value before performing an action in another thread. To do this, you do not want to free the first thread until the second has received the return value. You can handle this situation by setting FreeOnTerminate to false and then explicitly freeing the first thread from the second.



Distributing database applications

C++Builder provides support for creating distributed database applications using the MIDAS technology. This powerful technology includes a coordinated set of components that allow you to build a wide variety of multi-tiered database applications. Distributed database applications can be built on a variety of communications protocols, including DCOM, TCP/IP, and OLEnterprise.

Creating multi-tiered applications
A multi-tiered client/server application is partitioned into logical units which run in conjunction on separate machines. Multi-tiered applications share data and communicate with one another over a local-area network or even over the Internet. They provide many benefits, such as centralized business logic and thin client applications.
In its simplest form, sometimes called the “three-tiered model,” a multi-tiered application is partitioned into thirds:

Client application: provides a user interface on the user’s machine.
Application server: resides in a central networking location accessible to all clients and provides common data services.
Remote database server: provides the relational database management system (RDBMS).

In this three-tiered model, the application server manages the flow of data between clients and the remote database server, so it is sometimes called a “data broker.” With C++Builder you usually only create the application server and its clients, although, if you are really ambitious, you could create your own database back end as well.
In more complex multi-tiered applications, additional services reside between a client and a remote database server. For example, there might be a security services broker to handle secure Internet transactions, or bridge services to handle sharing of data with databases on platforms not directly supported by C++Builder.

C++Builder support for multi-tiered applications is based on the Multi-tier Distributed Application Services Suite (MIDAS).

Monday, April 27, 2009

Understanding MIDAS technology

MIDAS provides the mechanism by which client applications and application servers communicate database information. Using MIDAS requires MIDAS.DLL, which is used by both client and server applications to manage datasets stored as data packets. Building MIDAS applications may also require the SQL Explorer to help in database administration and to import server constraints into the Data Dictionary so that they can be checked at any level of the multi-tiered application.

Note: You must purchase server licenses for deploying your MIDAS applications.
MIDAS-based multi-tiered applications use the components on the MIDAS page of the component palette, plus a remote data module that is created by a wizard on the Multitier page of the New Items dialog.

remote data modules
Specialized data modules that work with a COM Automation server to give client applications access to any providers they contain. Used on the application server.

provider component
A data broker that provides data by creating data packets and resolves client updates. Used on the application server.

client dataset component
A specialized dataset that uses MIDAS.DLL to manage data stored in data packets.

connection components
A family of components that locate the server, form connections, and make the IAppServer interface available to client datasets. Each connection component is specialized to use a particular communications protocol.

Granting permission to access and launch the application server

Requests from the InternetExpress application appear to the application server as originating from a guest account with the name IUSR_computername, where computername is the name of the system running the Web application. By default, this account does not have access or launch permission for the application server. If you try to use the Web application without granting these permissions, when the Web browser tries to load the requested page it times out with EOLE_ACCESS_ERROR.
Note: Because the application server runs under this guest account, it can’t be shut down by other accounts.

To grant the Web application access and launch permissions, run DCOMCnfg.exe, which is located in the System32 directory of the machine that runs the application server. The following steps describe how to configure your application server:

1.When you run DCOMCnfg, select your application server in the list of applications on the Applications page.

2.Click the Properties button. When the dialog changes, select the Security page.

3.Select Use Custom Access Permissions, and press the Edit button. Add the name IUSR_computername to the list of accounts with access permission, where computername is the name of the machine that runs the Web application.

4.Select Use Custom Launch Permissions, and press the Edit button. Add IUSR_computername to this list as well.

5.Click the Apply button.

Building an InternetExpress application

The following steps describe how to build a Web application that creates HTML pages allowing users to interact with the data from an application server via a javascript-enabled Web browser.
1.Choose File|New to display the New Items dialog box, and on the New page select Web Server application.

2.From the MIDAS page of the component palette, add a connection component to the Web Module that appears when you create a new Web server application. The type of connection component you add depends on the communication protocol you want to use.

3.Set properties on your connection component to specify the application server with which it should establish a connection.

4.Instead of a client dataset, add an XML broker from the InternetExpress page of the component palette to the Web module. Like TClientDataSet, TXMLBroker represents the data from a provider on the application server and interacts with the application server through its IAppServer interface. However, unlike client datasets, XML brokers request data packets as XML instead of as OleVariants and interact with InternetExpress components instead of data controls.

5.Set the RemoteServer property of the XML broker to point to the connection component you added in step 2. Set the ProviderName property to indicate the provider on the application server that provides data and applies updates.

6.Add a MIDAS page producer to the Web module for each separate page that users will see in their browsers. For each MIDAS page producer, you must set the IncludePathURL property to indicate where it can find the javascript libraries that augment its generated HTML controls with data management capabilities.

7.Right-click a Web page and choose Action Editor to display the Action editor. Add action items for every message you want to handle from browsers. Associate the page producers you added in step 6 with these actions by setting their Producer property or writing code in an OnAction event handler.

8.Double-click each Web page to display the Web Page editor. (You can also display this editor by clicking the ellipsis button in the Object Inspector next to the WebPageItems property.) In this editor you can add Web Items to design the pages that users see in their browsers.

9.Build your Web application. Once you install this application with your Web server, browsers can call it by specifying the name of the application as the scriptname portion of the URL and the name of the Web Page component as the pathinfo portion.
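As a concrete illustration of step 9, if the application were deployed as MyApp.exe under a server’s scripts directory and one of its Web page components were named QueryPage (both names hypothetical), a browser would request a URL of this shape:

```
http://www.example.com/scripts/MyApp.exe/QueryPage
```

Here MyApp.exe is the scriptname portion and QueryPage is the pathinfo portion.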

Using the javascript libraries

The HTML pages generated by the InternetExpress components and the Web items they contain make use of several javascript libraries that ship with C++Builder:

xmldom.js
This library is a DOM-compatible XML parser written in javascript. It allows browsers that do not support XML to use XML data packets. Note that this does not include support for XML Islands, which are supported by IE5 and later.

xmldb.js
This library defines data access classes that manage XML data packets and XML delta packets.

xmldisp.js
This library defines classes that associate the data access classes in xmldb with HTML controls in the HTML page.

xmlerrdisp.js
This library defines classes that can be used when reconciling update errors. These classes are not used by any of the built-in InternetExpress components, but are useful when writing a Reconcile producer.

xmlshow.js
This library includes functions to display formatted XML data packets and XML delta packets. This library is not used by any of the InternetExpress components, but is useful when debugging.

These libraries can be found in the Source/Webmidas directory. Once you have installed these libraries, you must set the IncludePathURL property of all MIDAS page producers to indicate where they can be found.
It is possible to write your own HTML pages using the javascript classes provided in these libraries instead of using Web items to generate your Web pages. However, you must ensure that your code does not do anything illegal, as these classes include minimal error checking (so as to minimize the size of the generated Web pages).

The classes in the javascript libraries are an evolving standard, and are updated regularly. If you want to use them directly rather than relying on Web items to generate the javascript code, you can get the latest versions and documentation of how to use them from CodeCentral available through community.borland.com.

Creating the client application

In most regards, creating a multi-tiered client application is similar to creating a traditional two-tiered client. The major differences are that a multi-tiered client uses

A connection component to establish a conduit to the application server.
One or more TClientDataSet components to link to a data provider on the application server. Data-aware controls on the client are connected through data source components to these client datasets instead of TTable, TQuery, TStoredProc or TADODataSet components.

To create a multi-tiered client application, start a new project and follow these steps:
1.Add a new data module to the project.

2.Place a connection component on the data module. The type of connection component you add depends on the communication protocol you want to use.

3.Set properties on your connection component to specify the application server with which it should establish a connection. For more information, see the documentation on connection components.

4.Set the other connection component properties as needed for your application. For example, you might set the ObjectBroker property to allow the connection component to choose dynamically from several servers.

5.Place as many TClientDataSet components as needed on the data module, and set the RemoteServer property for each component to the name of the connection component you placed in Step 2.

6.Set the ProviderName property for each TClientDataSet component. If your connection component is connected to the application server at design time, you can choose available application server providers from the ProviderName property’s drop-down list.

7.Create the client application in much the same way you would create any other database application. You will probably want to use some of the special features of client datasets that support their interaction with the provider components on the application server.

Creating an Active Form for the client application

1.Because the client application will be deployed as an ActiveX control, you must have a Web server that runs on the same system as the client application. You can use a ready-made server such as Microsoft’s Personal Web server or you can write your own using the socket components described in "Working with sockets.”

2.Create the client application following the steps described in "Creating the client application.” except start by choosing File|New|Active Form, rather than beginning the client project as an ordinary C++Builder project.

3.If your client application uses a data module, add a call to explicitly create the data module in the active form initialization.

4.When your client application is finished, compile the project, and select Project | Web Deployment Options. In the Web Deployment Options dialog, you must do the following:

-On the Project page, specify the Target directory, the URL for the target directory, and the HTML directory. Typically, the Target directory and the HTML directory will be the same as the project directory for your Web server. The target URL is typically the name of the server machine that is specified in the Windows Network|DNS settings.
-On the Additional Files page, include midas.dll with your client application.

5.Finally, select Project|WebDeploy to deploy the client application as an active form.

Any Web browser that can run Active forms can run your client application by specifying the .HTM file that was created when you deployed the client application. This .HTM file has the same name as your client application project, and appears in the directory specified as the Target directory.

Creating the application server

You create an application server very much as you create most database applications. The major difference is that the application server includes a dataset provider.
To create an application server, start a new project, save it, and follow these steps:

1.Add a new remote data module to the project. From the main menu, choose File|New. Choose the Multitier page in the new items dialog, and select

Remote Data Module if you are creating a COM Automation server that clients access using DCOM, HTTP, or sockets.
Transactional Data Module if you are creating a remote data module that runs under MTS or COM+. Connections can be formed using DCOM, HTTP, or sockets. However, only DCOM supports the security services.

Note: When you add a remote data module to your project, the wizard also creates a special COM Automation object that contains a reference to the remote data module and uses it to look for providers. This object is called the implementation object.

2.Place the appropriate dataset components on the data module and set them up to access the database server.

3.Place a TDataSetProvider component on the data module for each dataset. This provider is required for brokering client requests and packaging data.

4.Set the DataSet property for each provider component to the name of the dataset to access. There are additional properties that you can set for the provider.

5.Write application server code to implement events, shared business rules, shared data validation, and shared security. You may want to extend the application server’s interface to provide additional ways that the client application can call the server.

6.Save, compile, and register or install the application server.
When the application server uses DCOM, HTTP, or sockets as a communication protocol, it acts as an Automation server and must be registered like any other ActiveX or COM server.

If you are using a transactional data module, you do not register the application server. Instead, you install it with MTS or COM+.

7.If your server application does not use DCOM, you must install the runtime software that receives client messages, instantiates the remote data module, and marshals interface calls.

For TCP/IP sockets this is a socket dispatcher application, Scktsrvr.exe.
For HTTP connections this is httpsrvr.dll, an ISAPI/NSAPI DLL that must be installed with your Web server.

Distributing a client application as an ActiveX control

The MIDAS architecture can be combined with C++Builder’s ActiveX features to distribute a MIDAS client application as an ActiveX control.
When you distribute your client application as an ActiveX control, create the application server as you would for any other multi-tiered application.
When creating the client application, you must use an Active Form as the basis instead of an ordinary form.

Once you have built and deployed your client application, it can be accessed from any ActiveX-enabled Web browser on another machine. For a Web browser to successfully launch your client application, the Web server must be running on the machine that has the client application.
If the client application uses DCOM to communicate between the client application and the application server, the machine with the Web browser must be enabled to work with DCOM. If the machine with the Web browser is a Windows 95 machine, it must have installed DCOM95, which is available from Microsoft

Building Web applications using InternetExpress

MIDAS clients can request that the application server provide data packets that are coded in XML instead of OleVariants. By combining XML-coded data packets, special javascript libraries of database functions, and C++Builder’s Web server application support, you can create thin client applications that can be accessed using a Web browser that supports javascript. These applications make up C++Builder’s InternetExpress support.

Before building an InternetExpress application, you should understand C++Builder’s Web server application architecture and the MIDAS database architecture; see “Understanding MIDAS technology.”

On the InternetExpress page of the component palette, you can find a set of components that extend this Web server application architecture to act as a MIDAS client. Using these components, the Web application generates HTML pages that contain a mixture of HTML, XML, and javascript. The HTML governs the layout and appearance of the pages seen by end users in their browsers. The XML encodes the data packets and delta packets that represent database information. The javascript allows the HTML controls to interpret and manipulate the data in these XML data packets.

If the InternetExpress application uses DCOM to connect to the application server, you must take additional steps to ensure that the application server grants access and launch permissions to its clients.

Tip: You can use the components on the InternetExpress page to build Web server applications with “live” data even if you do not have an application server. Simply add the provider and its dataset to the Web module.


Writing MIDAS Web applications

If you want to create Web-based clients for your multi-tiered database application, you must replace the client tier with a special Web application that acts simultaneously as a client to the application server and as a Web server application installed with a Web server on the same machine.
There are two approaches that you can take to build the MIDAS Web application:

You can combine the MIDAS architecture with C++Builder’s ActiveX support to distribute a MIDAS client application as an ActiveX control.
This allows any browser that supports ActiveX to run your client application as an in-process server.
You can use XML data packets to build an InternetExpress application. This allows browsers that support javascript to interact with your client application through HTML pages.

These two approaches are very different. Which one you choose depends on the following considerations:

Each approach relies on a different technology (ActiveX vs. javascript and XML). Consider what systems your end users will use. The first approach requires a browser to support ActiveX (which limits clients to a Windows platform). The second approach requires a browser to support javascript and the DHTML capabilities introduced by Netscape 4 and Internet Explorer 4.
ActiveX controls must be downloaded to the browser to act as an in-process server. As a result, the clients using an ActiveX approach require much more memory than the clients of an HTML-based application.

The InternetExpress approach can be integrated with other HTML pages. An ActiveX client must run in a separate window.
The InternetExpress approach uses standard HTTP, thereby avoiding any firewall issues that confront an ActiveX application.
The ActiveX approach provides greater flexibility in how you program your application. You are not limited by the capabilities of the javascript libraries. The client datasets used in the ActiveX approach surface more features (such as filters, ranges, aggregation, optional parameters, delayed fetching of BLOBs or nested details, and so on) than the XML brokers used in the InternetExpress approach.

Requesting data from an application server

The following table lists the properties and methods of TClientDataSet that determine how data is fetched from an application server in a multi-tiered application:
FetchOnDemand property
Determines whether or not a client dataset automatically fetches data as needed, or relies on the application to call the client dataset’s GetNextPacket, FetchBlobs, and FetchDetails functions to retrieve additional data.

PacketRecords property
Specifies the type or number of records to return in each data packet.

GetNextPacket method
Fetches the next data packet from the application server.

FetchBlobs method
Fetches any BLOB fields for the current record when the application server does not include BLOB data automatically.

FetchDetails method
Fetches nested detail datasets for the current record when the application server does not include these in data packets automatically.

By default, a client dataset retrieves all records from the application server. You can control how data is retrieved using PacketRecords and FetchOnDemand.
PacketRecords specifies either how many records to fetch at a time, or the type of records to return. By default, PacketRecords is set to -1, which means that all available records are fetched at once, either when the client dataset is first opened or when the application explicitly calls GetNextPacket. When PacketRecords is -1, then after it first fetches data, a client dataset never needs to fetch more data because it already has all available records.

To fetch records in small batches, set PacketRecords to the number of records to fetch. For example, the following statement sets the size of each data packet to ten records:

ClientDataSet1->PacketRecords = 10;

This process of fetching records in batches is called “incremental fetching”. Client datasets use incremental fetching when PacketRecords is greater than zero. By default, the client dataset calls GetNextPacket to fetch data as needed. Newly fetched packets are appended to the end of the data already in the client dataset.
GetNextPacket returns the number of records it fetches. If the return value is the same as PacketRecords, the end of available records was not encountered. If the return value is greater than 0 but less than PacketRecords, the last record was reached during the fetch operation. If GetNextPacket returns 0, then there are no more records to fetch.

Note: Incremental fetching only works if the remote data module preserves state information. That is, you must not be using MTS, and the remote data module must be configured so that each client application has its own data module instance.
You can also use PacketRecords to fetch metadata information about a database from the application server. To retrieve metadata information, set PacketRecords to 0.
Automatic fetching of records is controlled by the FetchOnDemand property. When FetchOnDemand is true (the default), automatic fetching is enabled. To prevent automatic fetching of records as needed, set FetchOnDemand to false. When FetchOnDemand is false, the application must explicitly call GetNextPacket to fetch records.

Applications that need to represent extremely large read-only datasets can turn off FetchOnDemand to ensure that the client datasets do not try to load more data than can fit into memory. Between fetches, the client dataset frees its cache using the EmptyDataSet method. This approach, however, does not work well when the client must post updates to the application server.

Supporting state information in remote data modules

The IAppServer interface, which controls all communication between client datasets and providers on the application server, is mostly stateless. When an application is stateless, it does not “remember” anything that happened in previous calls by the client. This stateless quality is useful if you are pooling database connections in a transactional data module, because your application server does not need to distinguish between database connections for persistent information such as record currency. Similarly, this stateless quality is important when you are sharing remote data module instances between many clients, as occurs with just-in-time activation or object pooling.

However, there are times when you want to maintain state information between calls to the application server. For example, when requesting data using incremental fetching, the provider on the application server must “remember” information from previous calls (the current record).
This is not a problem if the remote data module is configured so that each client has its own instance. When each client has its own instance of the remote data module, there are no other clients to change the state of the data module between client calls.

However, it is reasonable to want the benefits of sharing remote data module instances while still managing persistent state information. For example, you may need to use incremental fetching to display a dataset that is too large to fit in memory at one time.
Before and after any calls to the IAppServer interface that the client dataset sends to the application server (AS_ApplyUpdates, AS_Execute, AS_GetParams, AS_GetRecords, or AS_RowRequest), it receives an event where it can send or retrieve custom state information. Similarly, before and after providers respond to these client-generated calls, they receive events where they can retrieve or send custom state information. Using this mechanism, you can communicate persistent state information between client applications and the application server, even if the application server is stateless. For example, to enable incremental fetching in a stateless application server, you can do the following:

Use the client dataset’s BeforeGetRecords event to send the key value of the last record to the application server:

void __fastcall TDataModule1::ClientDataSet1BeforeGetRecords(TObject *Sender, OleVariant &OwnerData)

{
TClientDataSet *pDS = (TClientDataSet *)Sender;
if (!pDS->Active)
return;
void *CurRecord = pDS->GetBookmark(); // save current record
try
{
// locate the last record in the current packet. Note this only works if FetchOnDemand
// is false. If FetchOnDemand is true, you can save the key value of the last record
// fetched in an AfterGetRecords event handler and use that instead
pDS->Last(); // locate the last record in the new packet

OwnerData = pDS->FieldValues["Key"]; // Send key value for the last record to app server
pDS->GotoBookmark(CurRecord); // return to current record
}
__finally
{
pDS->FreeBookmark(CurRecord);
}
}

On the server, use the provider’s BeforeGetRecords event to locate the appropriate set of records:

void __fastcall TRemoteDataModule1::Provider1BeforeGetRecords(TObject *Sender, OleVariant &OwnerData)
{
TLocateOptions opts;
if (!VarIsEmpty(OwnerData))
{
TDataSet *pDS = ((TDataSetProvider *)Sender)->DataSet;
if (pDS->Locate("Key", OwnerData, opts))
pDS->Next(); // start the new packet at the record after the last one sent
}
}

Applying updates for master/detail tables

When you apply updates for master/detail tables, the order in which you list datasets to update is significant. Generally you should always update master tables before detail tables, except when handling deleted records. In complex master/detail relationships where the detail table for one relationship is the master table for another detail table, the same rule applies.
You can update master/detail tables at the database or dataset component levels. For purposes of control (and of creating explicitly self-documented code), you should apply updates at the dataset level. The following example illustrates how you should code cached updates to two tables, Master and Detail, involved in a master/detail relationship:

Database1->StartTransaction();

try
{
Master->ApplyUpdates();
Detail->ApplyUpdates();
Database1->Commit();
}
catch(...)
{
Database1->Rollback();
throw;
}
Master->CommitUpdates();
Detail->CommitUpdates();

If an error occurs during the application of updates, this code also leaves both the cache and the underlying data in the database tables in the same state they were in before the calls to ApplyUpdates.
If an exception is thrown during the call to Master->ApplyUpdates, it is handled like the single dataset case previously described. Suppose, however, that the call to Master->ApplyUpdates succeeds, and the subsequent call to Detail->ApplyUpdates fails. In this case, the changes are already applied to the master table. Because all data is updated inside a database transaction, however, even the changes to the master table are rolled back when Database1->Rollback is called in the catch block. Furthermore, Master->CommitUpdates is not called because the exception which is rethrown causes that code to be skipped, so the cache is also left in the state it was before the attempt to update.

To appreciate the value of the two-phase update process, assume for a moment that ApplyUpdates is a single-phase process which updates the data and the cache. If this were the case, and if there were an error while applying the updates to the Detail table, then there would be no way to restore both the data and the cache to their original states. Even though the call to Database1->Rollback would restore the database, there would be no way to restore the cache.

Limiting records with parameters

When the provider on the application server represents the results of a table component, you can use the Params property to limit the records that are provided to the Data property.
Each parameter name must match the name of a field in the TTable component on the application server. The provider component on the application server sends only those records whose values on the corresponding fields match the values assigned to the parameters.

For example, consider a client application that displays the orders for a single customer. When the user identifies the customer, the client dataset sets its Params property to include a single parameter named CustID (or whatever the corresponding field in the server table is called) whose value identifies the customer whose orders it will display. When the client dataset requests data from the application server, it passes this parameter value. The application server then sends only the records for the identified customer. This is more efficient than letting the application server send all the orders records to the client application and then filtering the records on the client side.

Master/detail relationships: two drawbacks

1. The detail table must fetch and store all of its records from the application server even though it only uses one detail set at a time. This problem can be mitigated by using parameters (see “Limiting records with parameters”).
2. It is very difficult to apply updates, because client datasets apply updates at the dataset level and master/detail updates span multiple datasets. Even in a two-tiered environment, where you can use the database to apply updates for multiple tables in a single transaction, applying updates in master/detail forms is tricky (see “Applying updates for master/detail tables”).

In multi-tiered applications, you can avoid these problems by using nested tables to represent the master/detail relationship. To do this, set up a master/detail relationship between the tables on the application server. Then set the DataSet property of your provider component to the master table.
When clients call the GetRecords method of the provider, it automatically includes the detail datasets as a DataSet field in the records of the data packet. When clients call the ApplyUpdates method of the provider, it automatically handles applying updates in the proper order.

Building an example master/detail form

The following steps create a simple form in which a user can scroll through customer records and display all orders for the current customer. The master table is the CustomersTable table, and the detail table is OrdersTable.

1 Place two TTable and two TDataSource components in a data module.
2 Set the properties of the first TTable component as follows:

DatabaseName: BCDEMOS
TableName: CUSTOMER
Name: CustomersTable

3 Set the properties of the second TTable component as follows:

DatabaseName: BCDEMOS
TableName: ORDERS
Name: OrdersTable

4 Set the properties of the first TDataSource component as follows:

Name: CustSource
DataSet: CustomersTable

5 Set the properties of the second TDataSource component as follows:

Name: OrdersSource
DataSet: OrdersTable

6 Place two TDBGrid components on a form.
7 Choose File|Include Unit Hdr to specify that the form should use the data module.
8 Set the DataSource property of the first grid component to
“CustSource”, and set the DataSource property of the second grid to “OrdersSource”.
9 Set the MasterSource property of OrdersTable to “CustSource”. This links the CUSTOMER table (the master table) to the ORDERS table (the detail table).

10 Double-click the MasterFields property value box in the Object Inspector to invoke the Field Link Designer to set the following properties:

In the Available Indexes field, choose CustNo to link the two tables by the CustNo field.
Select CustNo in both the Detail Fields and Master Fields field lists.
Click the Add button to add this join condition. In the Joined Fields list,
“CustNo -> CustNo” appears.
Choose OK to commit your selections and exit the Field Link Designer.

11 Set the Active properties of CustomersTable and OrdersTable to true to display data in the grids on the form.
12 Compile and run the application.

If you run the application now, you will see that the tables are linked together, and that when you move to a new record in the CUSTOMER table, you see only those records in the ORDERS table that belong to the current customer.

Supporting master/detail relationships

You can create master/detail relationships between client datasets in your client application in the same way you set up master/detail forms in one- and two-tiered applications.
Creating master/detail forms
A table component’s MasterSource and MasterFields properties can be used to establish one-to-many relationships between two tables.
The MasterSource property is used to specify a data source from which the table will get data for the master table. For instance, if you link two tables in a master/detail relationship, then the detail table can track the events occurring in the master table by specifying the master table’s data source component in this property.

The MasterFields property specifies the column(s) common to both tables used to establish the link. To link tables based on multiple column names, use a semicolon delimited list:

Table1->MasterFields = "OrderNo;ItemNo";

To help create meaningful links between two tables, you can use the Field Link designer.

Managing transactions in multi-tiered applications

When client applications apply updates to the application server, the provider component automatically wraps the process of applying updates and resolving errors in a transaction. This transaction is committed if the number of problem records does not exceed the MaxErrors value specified as an argument to the ApplyUpdates method. Otherwise, it is rolled back.
In addition, you can add transaction support to your server application by adding a database component or using passthrough SQL. This works the same way that you would manage transactions in a two-tiered application.

If you have a transactional data module, you can broaden your transaction support by using MTS or COM+ transactions. These transactions can include any of the business logic on your application server, not just the database access. In addition, because they support two-phase commits, they can span multiple databases.
Only the BDE- and ADO-based data access components support two-phase commit. Do not use InterBase Express components if you want to have transactions that span multiple databases.
Important note: When using the BDE, two-phase commit is fully implemented only on Oracle7 and MS-SQL databases. If your transaction involves multiple databases, and some of them are remote servers other than Oracle7 or MS-SQL, your transaction runs a small risk of only partially succeeding. Within any one database, however, you will always have transaction support.

By default, all IAppServer calls on a transactional data module are transactional. You need only set the transaction attribute of your data module to indicate that it must participate in transactions. In addition, you can extend the application server’s interface to include method calls that encapsulate transactions that you define.

If your transaction attribute indicates that the application server requires a transaction, then every time a client calls a method on its interface, it is automatically wrapped in a transaction. All client calls to your application server are then enlisted in that transaction until you indicate that the transaction is complete. These calls either succeed as a whole or are rolled back.

Note:Do not combine MTS or COM+ transactions with explicit transactions created by a database or ADO connection component or using passthrough SQL. When your transactional data module is enlisted in a transaction, it automatically enlists all of your database calls in the transaction as well.

Transactions

A transaction is a group of actions that must all be carried out successfully on one or more tables in a database before they are committed (made permanent). If any of the actions in the group fails, then all actions are rolled back (undone).
Transactions protect against hardware failures that occur in the middle of a database command or set of commands. They also form the basis of multi-user concurrency control on SQL servers. When each user interacts with the database only through transactions, one user’s commands can’t disrupt the unity of another user’s transaction. Instead, the SQL server schedules incoming transactions, which either succeed as a whole or fail as a whole.

Although transaction support is not part of most local databases, the BDE drivers provide limited transaction support for some of these databases. For SQL servers and ODBC-compliant databases, the database transaction support is provided by the component that represents the database connection. In multi-tiered applications, you can create transactions that include actions other than database operations or that span multiple databases.

Using transactions
A transaction is a group of actions that must all be carried out successfully on one or more tables in a database before they are committed (made permanent). If one of the actions in the group fails, then all actions are rolled back (undone). By using transactions, you ensure that the database is not left in an inconsistent state when a problem occurs completing one of the actions that make up the transaction.
For example, in a banking application, transferring funds from one account to another is an operation you would want to protect with a transaction. If, after decrementing the balance in one account, an error occurred incrementing the balance in the other, you want to roll back the transaction so that the database still reflects the correct total balance.

By default, the BDE provides implicit transaction control for your applications. When an application is under implicit transaction control, a separate transaction is used for each record in a dataset that is written to the underlying database. Implicit transactions guarantee both a minimum of record update conflicts and a consistent view of the database. On the other hand, because each row of data written to a database takes place in its own transaction, implicit transaction control can lead to excessive network traffic and slower application performance. Also, implicit transaction control will not protect logical operations that span more than one record, such as the transfer of funds described previously.

If you explicitly control transactions, you can choose the most effective times to start, commit, and roll back your transactions. When you develop applications in a multi-user environment, particularly when your applications run against a remote SQL server, you should control transactions explicitly.

Working with (connection) transactions

The TADOConnection component includes a number of methods and events for working with transactions. These transaction capabilities are shared by all of the ADO command and dataset components that use the connection to the data store.

1.Using transaction methods
Use the methods BeginTrans, CommitTrans, and RollbackTrans to perform transaction processing. BeginTrans starts a transaction in the data store associated with the ADO connection component. CommitTrans commits a currently active transaction, saving changes to the database and ending the transaction. RollbackTrans cancels a currently active transaction, abandoning all changes made during the transaction and ending the transaction. Read the InTransaction property to determine at any given point whether the connection component has a transaction open.

A transaction started by the connection component is shared by all command and dataset components that use the connection established by the TADOConnection component.
2.Using transaction events
The ADO connection component provides a number of events for detecting when transaction-related processes have completed. These events indicate when a transaction process initiated by the BeginTrans, CommitTrans, or RollbackTrans methods has completed successfully at the data store.
The OnBeginTransComplete event is triggered when the data store has successfully started a transaction after a call to the connection component’s BeginTrans method. The OnCommitTransComplete event is triggered after a transaction is successfully committed due to a call to CommitTrans. And OnRollbackTransComplete is triggered after a transaction is successfully rolled back due to a call to RollbackTrans.

Database security

Databases often contain sensitive information. Different databases provide security schemes for protecting that information. Some databases, such as Paradox and dBASE, only provide security at the table or field level. When users try to access protected tables, they are required to provide a password. Once users have been authenticated, they can see only those fields (columns) for which they have permission.
Most SQL servers require a password and user name to use the database server at all. Once the user has logged in to the database, that username and password determine which tables can be used. For information on providing passwords to SQL servers when using the BDE, see Controlling server login. For information on providing this information when using ActiveX Data Objects (ADO), see Controlling the connection login. For information on providing this information when using the InterBase direct access components, see the OnLogin event of TIBDatabase.

When designing database applications, you must consider what type of authentication is required by your database server. If you do not want your users to have to provide a password, you must either use a database that does not require one or you must provide the password and username to the server programmatically. When providing the password programmatically, care must be taken that security can’t be breached by reading the password from the application.
If you are requiring your user to supply a password, you must consider when the password is required. If you are using a local database but intend to scale up to a larger SQL server later, you may want to prompt for the password before you access the table, even though it is not required until then.

If your application requires multiple passwords because you must log in to several protected systems or databases, you can have your users provide a single master password which is used to access a table of passwords required by the protected systems. The application then supplies passwords programmatically, without requiring the user to provide multiple passwords.
In multi-tiered applications, you may want to use a different security model altogether. You can use HTTPS or MTS to control access to middle tiers, and let the middle tiers handle all details of logging into database servers.

Types of databases

You can connect to different types of databases, depending on what drivers you have installed with the Borland Database Engine (BDE) or ActiveX Data Objects (ADO).
These drivers may connect your application to local databases such as Paradox, Access, and dBASE or remote database servers like Microsoft SQL Server, Oracle, and Informix. Similarly, the InterBase Express components can access either a local or remote version of InterBase.

NB: Different versions of C++Builder come with the components that use these drivers (BDE or ADO), or with the InterBase Express components.

Choosing what type of database to use depends on several factors. Your data may already be stored in an existing database. If you are creating the tables of information your application uses, you may want to consider the following questions.

How much data will the tables hold?
How many users will be sharing these tables?
What type of performance (speed) do you require from the database?

Local databases
Local databases reside on your local drive or on a local area network. They have proprietary APIs for accessing the data. Often, they are dedicated to a single system. When they are shared by several users, they use file-based locking mechanisms. Because of this, they are sometimes called file-based databases.
Local databases can be faster than remote database servers because they often reside on the same system as the database application.
Because they are file-based, local databases are more limited than remote database servers in the amount of data they can store. Therefore, in deciding whether to use a local database, you must consider how much data the tables are expected to hold.

Applications that use local databases are called single-tiered applications because the application and the database share a single file system.
Examples of local databases include Paradox, dBASE, FoxPro, and Access.

Remote database servers
Remote database servers usually reside on a remote machine. They use Structured Query Language (SQL) to enable clients to access the data. Because of this, they are sometimes called SQL servers. (Another name is Remote Database Management system, or RDBMS.) In addition to the common commands that make up SQL, most remote database servers support a unique “dialect” of SQL.
Remote database servers are designed for access by several users at the same time. Instead of a file-based locking system such as those employed by local databases, they provide more sophisticated multi-user support, based on transactions.

Remote database servers hold more data than local databases. Sometimes, the data from a remote database server does not even reside on a single machine, but is distributed over several servers.
Applications that use remote database servers are called two-tiered applications or multi-tiered applications because the application and the database operate on independent systems (or tiers).
Examples of SQL servers include InterBase, Oracle, Sybase, Informix, Microsoft SQL server, and DB2.

Designing database applications

Database applications allow users to interact with information that is stored in databases. Databases provide structure for the information, and allow it to be shared among different applications.
C++Builder provides support for relational database applications. Relational databases organize information into tables, which contain rows (records) and columns (fields). These tables can be manipulated by simple operations known as the relational calculus.
When designing a database application, you must understand how the data is structured. Based on that structure, you can then design a user interface to display data to the user and allow the user to enter new information or modify existing data.

Using databases
The components on the Data Access page, the ADO page, or the InterBase page of the Component palette allow your application to read from and write to databases. The components on the Data Access page use the Borland Database Engine (BDE) to access database information which they make available to the data-aware controls in your user interface. The ADOExpress components on the ADO page use ActiveX Data Objects (ADO) to access the database information through OLEDB. The InterBase Express components on the InterBase page access an InterBase database directly.

Depending on your version of C++Builder, the BDE includes drivers for different types of databases. While all types of databases contain tables which store information, different types support additional features such as

Database security
Transactions
Data dictionary
Referential integrity, stored procedures, and triggers

Monday, April 20, 2009

The TPersistent Branch

Directly below TObject in the VCL hierarchy is TPersistent. TPersistent adds two very important methods to all classes based on it—SaveToStream and LoadFromStream. These methods supply persistence to objects.
For example, when the form designer needs to create a DFM file (a file used to store information about the components on the form), it loops through its components array and calls SaveToStream for all the components on the form. Each component “knows” how to write its changed properties out to a stream (in this case, a text file). Conversely, when the form designer needs to load the properties for components from the DFM file, it loops through the components array and calls LoadFromStream for each component. Thus, any class derived from TPersistent has the ability to save its state information and restore it on demand.

The types of classes in this branch include:

TGraphicsObject, an abstract base class for objects which encapsulate Windows graphics objects: TBrush, TFont, and TPen.
TGraphic, an abstract base class type for objects such as icons, bitmaps, and metafiles that can store and display visual images: TBitmap, TIcon, and TMetafile.
TStrings, a base class for objects that represent a list of strings.
TClipboard, a wrapper for the Windows clipboard, which contains text or graphics that have been cut or copied from an application.

TCollection, TOwnedCollection, and TCollectionItem, which maintain indexed collections of specially defined items.

The TObject Branch

All VCL objects descend from TObject, an abstract class whose methods define fundamental behavior like construction, destruction, and message handling. Much of the powerful capability of VCL objects is established by the methods that TObject introduces. TObject encapsulates the fundamental behavior common to all objects in the VCL by introducing methods that provide:

The ability to respond when objects are created or destroyed.
Class type and instance information on an object, and runtime type information (RTTI) about its published properties.
Support for message-handling.

TObject is the immediate ancestor of many simple classes. Classes in this branch share one important characteristic: they are transitory. That is, these classes have no method for saving the state they are in prior to destruction; they are not persistent.
One of the main groups of classes in this branch is the Exception class. This class provides a large set of built-in exception classes for automatically handling divide-by-zero errors, file I/O errors, invalid typecasts, and many other exception conditions.

Another group in the TObject branch consists of classes that encapsulate data structures, such as:

TBits, a class that stores an “array” of Boolean values
TList, a class that maintains a list of pointers
TStack, a class that maintains a last-in first-out array of pointers
TQueue, a class that maintains a first-in first-out array of pointers

You can also find wrappers for external objects like TPrinter, which encapsulates the Windows printer interface, and TRegistry, a low-level wrapper for the system registry and functions that operate on the registry.
TStream is a good example of another type of class in this branch. TStream is the base class type for stream objects that can read from or write to various kinds of storage media, such as disk files, dynamic memory, and so on.
As you can see, this branch includes many different types of classes that are very useful to you as a developer.

Types of events

The kinds of events that can occur can be divided into two main categories:

User events
System events

Regardless of how the event was called, C++Builder looks to see if you have assigned any code to handle that event. If you have, then that code is executed; otherwise, nothing is done.

User events

User events are actions that are initiated by the user. Examples of user events are OnClick (the user clicked the mouse), OnKeyPress (the user pressed a key on the keyboard), and OnDblClick (the user double-clicked a mouse button). These events are always tied to a user's actions.

System events

System events are events that the operating system fires for you. For example, the OnTimer event (the Timer component issues one of these events whenever a predefined interval has elapsed), the OnCreate event (the component is being created), the OnPaint event (a component or window needs to be redrawn), etc. Usually, system events are not directly initiated by a user action.

Understanding the VCL

The Visual Component Library (VCL) is based on the properties, methods, and events (PME) model. The PME model defines the data members (properties), the functions that operate on the data (methods), and a way to interact with users of the class (events). The VCL is a hierarchy of objects, written in Object Pascal and tied to the C++Builder IDE, that allows you to develop applications quickly. Using C++Builder’s Component palette and Object Inspector, you can place VCL components on forms and specify their properties without writing code.

Properties

Properties are characteristics of components. You can see and change properties at design time and get immediate feedback as the components react in the IDE. Well-designed properties make your components easier for others to use and easier for you to maintain.

Methods

Methods are functions that are members of a class. Class methods can access all the public, protected and private properties and data members of the class and are commonly referred to as member functions.

Events

Event driven programming (EDP) means just that—programming by responding to events. In essence, event driven means that the program does not restrict what the user can do next. For example, in a Windows program, the programmer has no way of knowing the sequence of actions the user will perform next. They may pick a menu item, click a button, or mark some text. So, EDP means that you write code to handle whatever events occur that you're interested in, rather than write code that always executes in the same restricted order.

The integrated development environment

When you start C++Builder, you are immediately placed within the integrated development environment, also called the IDE. This environment provides all the tools you need to design, develop, test, debug, and deploy applications.

C++Builder’s development environment includes a visual form designer, Object Inspector, Component palette, Project Manager, source code editor, debugger, and installation tool. You can move freely from the visual representation of an object (in the form designer), to the Object Inspector to edit the initial runtime state of the object, to the source code editor to edit the execution logic of the object. Changing code-related properties, such as the name of an event handler, in the Object Inspector automatically changes the corresponding source code. In addition, changes to the source code, such as renaming an event handler method in a form class declaration, are immediately reflected in the Object Inspector.

Designing applications
C++Builder includes all the tools necessary to start designing applications:

A blank window, known as a form, on which to design the UI for your application.
An extensive class library with many reusable objects.
An Object Inspector for examining and changing object traits.
A Code editor that provides direct access to the underlying program logic.
A Project Manager for managing the files that make up one or more projects.
Many other tools such as an image editor on the toolbar and an integrated debugger on menus to support application development in the IDE.

Command-line tools including compilers, linkers, and other utilities.

You can use C++Builder to design any kind of 32-bit Windows application—from general-purpose utilities to sophisticated data access programs or distributed applications. C++Builder’s database tools and data-aware components let you quickly develop powerful desktop database and client/server applications. Using C++Builder’s data-aware controls, you can view live data while you design your application and immediately see the results of database queries and changes to the application interface.

Monday, April 6, 2009

Using the const Keyword in C++ Programs

C++ extends const to include classes and member functions. In a C++ class definition, use the const modifier following a member function declaration; the member function is then prevented from modifying any data in the class.

A class object defined with the const keyword attempts to use only member functions that are also defined with const. If you call a member function that is not defined as const, the compiler issues a warning that a non-const function is being called for a const object. Using the const keyword in this manner is a safety feature of C++.

Warning: A pointer can indirectly modify a const variable, as in the following:

*(int *)&maxint = 35;

If you use the const modifier with a pointer parameter in a function's parameter list, the function cannot modify the variable that the pointer points to. For example,

int printf (const char *format, ...);

printf is prevented from modifying the format string.
volatile
Syntax

volatile <data definition> ;

Description

Use the volatile modifier to indicate that a variable can be changed by a background routine, an interrupt routine, or an I/O port. Declaring an object to be volatile warns the compiler not to make assumptions concerning the value of the object while evaluating expressions in which it occurs because the value could change at any moment. It also prevents the compiler from making the variable a register variable.

volatile int ticks;

void timer()
{
    ticks++;
}

void wait(int interval)
{
    ticks = 0;
    while (ticks < interval)
        ;   // Do nothing
}

The routines in this example (assuming timer has been properly associated with a hardware clock interrupt) implement a timed wait of ticks specified by the argument interval. A highly optimizing compiler might not load the value of ticks inside the test of the while loop since the loop doesn’t change the value of ticks.

Note: C++ extends volatile to include classes and member functions. If you’ve declared a volatile object, you can use only its volatile member functions.

pascal, _pascal, __pascal
Syntax

pascal <data/function definition> ;

_pascal <data/function definition> ;

__pascal <data/function definition> ;

Description

Use the pascal, _pascal, and __pascal keywords to declare a variable or a function using a Pascal-style naming convention (the name is in uppercase).

In addition, pascal declares Pascal-style parameter-passing conventions when applied to a function header (parameters pushed left to right; the called function cleans up the stack).

In C++ programs, functions declared with the pascal modifier will still be mangled.

_stdcall, __stdcall
Syntax

__stdcall

_stdcall

Description

The _stdcall and __stdcall keywords force the compiler to generate function calls using the Standard calling convention. Functions must pass the correct number and type of arguments; this is unlike normal C use, which permits a variable number of function arguments. Such functions comply with the standard WIN32 argument-passing convention.
_fastcall, __fastcall
Syntax

return-value _fastcall function-name(parm-list)

return-value __fastcall function-name(parm-list)

Description

Use the __fastcall modifier to declare functions that expect parameters to be passed in registers. The first three parameters are passed (from left to right) in EAX, EDX, and ECX, if they fit in a register. Registers are not used if the parameter is a floating-point or struct type.

The compiler treats this calling convention as a new language specifier, along the lines of _cdecl and _pascal.

Functions declared using _cdecl or _pascal cannot also have the _fastcall modifier because they use the stack to pass parameters.

The compiler prefixes the __fastcall function name with an at-sign ("@"). This prefix applies to both unmangled C function names and to mangled C++ function names.
__thread, multithread variables
Category

C++Builder keyword extensions

Description

The keyword __thread is used in multithread programs to preserve a unique copy of global and static class variables. Each thread in the program maintains its own private copy of each __thread variable.

The syntax is type __thread variable_name. For example,

int __thread x;

declares an integer type variable that will be global but private to each thread in the program in which the statement occurs.

Function modifiers
This section presents descriptions of the C++Builder function modifiers.

You can use the __declspec(dllexport) and __declspec(dllimport) modifiers to modify functions.

In 32-bit programs, these keywords can be applied to class, function, and variable declarations.

The __declspec(dllexport) modifier makes the function exportable from Windows. The __declspec(dllimport) modifier makes a function available to a Windows program. The keywords are used in an executable (if you don't use smart callbacks) or in a DLL.

Functions declared with the __fastcall modifier have different names than their non-__fastcall counterparts. The compiler prefixes the __fastcall function name with an @. This prefix applies to both unmangled C function names and to mangled C++ function names.

C++Builder modifiers

Modifier              Use with           Description

const (1)             Variables          Prevents changes to object.
volatile (1)          Variables          Prevents register allocation and some optimization. Warns compiler that object might be subject to outside change during evaluation.
__cdecl (2)           Functions          Forces C argument-passing convention. Affects linker and link-time names.
__cdecl (2)           Variables          Forces global identifier case sensitivity and leading underscores in C.
__pascal              Functions          Forces Pascal argument-passing convention. Affects linker and link-time names.
__pascal              Variables          Forces global identifier case insensitivity with no leading underscores in C.
__import              Functions/classes  Tells the compiler which functions or classes to import.
__export              Functions/classes  Tells the compiler which functions or classes to export.
__declspec(dllimport) Functions/classes  Tells the compiler which functions or classes to import. This is the preferred method.
__declspec(dllexport) Functions/classes  Tells the compiler which functions or classes to export. This is the preferred method.
__fastcall            Functions          Forces register parameter-passing convention. Affects the linker and link-time names.
__stdcall             Functions          Forces the standard WIN32 argument-passing convention.

1. C++ extends const and volatile to include classes and member functions.
2. This is the default.

Variable modifiers

Mixed-language calling conventions

C++Builder allows your programs to easily call routines written in other languages, and vice versa. When you mix languages, you have to deal with two important issues: identifiers and parameter passing.

By default, C++Builder saves all global identifiers in their original case (lower, upper, or mixed) with an underscore "_" prepended to the front of the identifier. To override this default, you can use the -u command-line option.

The following table summarizes the effects of a modifier applied to a called function. For every modifier, the table shows the order in which the function parameters are pushed on the stack. Next, the table shows whether the calling program (the caller) or the called function (the callee) is responsible for popping the parameters off the stack. Finally, the table shows the effect on the name of a global function.

Calling conventions

Modifier     Push parameters  Pop parameters  Name change (only in C)

__cdecl (1)  Right to left    Caller          '_' prepended
__fastcall   Left to right    Callee          '@' prepended
__pascal     Left to right    Callee          Uppercase
__stdcall    Right to left    Callee          No change

1. This is the default.

Note: __fastcall and __stdcall are always name mangled in C++.

const
Syntax

const <variable name> [ = <value> ] ;

<function name> ( const <type> *<variable name> ) ;

<function name> const ;

Description

Use the const modifier to make a variable value unmodifiable.

Use the const modifier to assign an initial value to a variable that cannot be changed by the program. Any future assignments to a const result in a compiler error.

A const pointer cannot be modified, though the object to which it points can be changed. Consider the following examples.

const float pi = 3.14;

const maxint = 12345; // When used by itself, const is equivalent to const int.
char *const str1 = "Hello, world"; // A constant pointer

char const *str2 = "Borland International"; // A pointer to a constant character string.

Given these declarations, the following statements are illegal.

pi = 3.0; // Assigns a value to a const.

i = maxint++; // Increments a const.

str1 = "Hi, there!"; // Points str1 to something else.

Declarations and declarators

A declaration is a list of names. The names are sometimes referred to as declarators or identifiers. The declaration begins with optional storage class specifiers, type specifiers, and other modifiers. The identifiers are separated by commas and the list is terminated by a semicolon.

Simple declarations of variable identifiers have the following pattern:

data-type var1 <=init1>, var2 <=init2>, ... ;

where var1, var2,... are any sequence of distinct identifiers with optional initializers. Each of the variables is declared to be of type data-type. For example,

int x = 1, y = 2;

creates two integer variables called x and y (and initializes them to the values 1 and 2, respectively).

These are all defining declarations; storage is allocated and any optional initializers are applied.

The initializer for an automatic object can be any legal expression that evaluates to an assignment-compatible value for the type of the variable involved. Initializers for static objects must be constants or constant expressions.

In C++, an initializer for a static object can be any expression involving constants and previously declared variables and functions.

The format of the declarator indicates how the declared name is to be interpreted when used in an expression. If type is any type, and storage class specifier is any storage class specifier, and if D1 and D2 are any two declarators, then the declaration

storage-class-specifier type D1, D2;

indicates that each occurrence of D1 or D2 in an expression will be treated as an object of type type and storage class storage class specifier. The type of the name embedded in the declarator will be some phrase containing type, such as "type," "pointer to type," "array of type," "function returning type," or "pointer to function returning type," and so on.

For example, in Declaration syntax examples, each of the declarators could be used as rvalues (or possibly lvalues in some cases) in expressions where a single int object would be appropriate. The types of the embedded identifiers are derived from their declarators as follows:

Declaration syntax examples

Declarator syntax  Implied type of name                Example

type name;         type                                int count;
type name[];       (open) array of type                int count[];
type name[3];      Fixed array of three elements,      int count[3];
                   all of type (name[0], name[1],
                   and name[2])
type *name;        Pointer to type                     int *count;
type *name[];      (open) array of pointers to type    int *count[];
type *(name[]);    Same as above                       int *(count[]);
type (*name)[];    Pointer to an (open) array of type  int (*count)[];
type &name;        Reference to type (C++ only)        int &count;
type name();       Function returning type             int count();
type *name();      Function returning pointer to type  int *count();
type *(name());    Same as above                       int *(count());
type (*name)();    Pointer to function returning type  int (*count)();
Storage class specifiers
Storage class specifiers are also called type specifiers. They dictate the location (data segment, register, heap, or stack) of an object and its duration or lifetime (the entire running time of the program, or during execution of some blocks of code). Storage class can be established by the declaration syntax, by its placement in the source code, or by both of these factors.
The keyword mutable does not affect the lifetime of the class member to which it is applied.

The storage class specifiers in C++Builder are:

auto
extern
mutable
register
static
typedef
__declspec

Arrays, structures, and unions

You initialize arrays and structures (at declaration time, if you like) with a brace-enclosed list of initializers for the members or elements of the object in question. The initializers are given in increasing array subscript or member order. You initialize unions with a brace-enclosed initializer for the first member of the union. For example, you could declare an array days, which counts how many times each day of the week appears in a month (assuming that each day will appear at least once), as follows:

int days[7] = { 1, 1, 1, 1, 1, 1, 1 };

The following rules initialize character arrays and wide character arrays:

You can initialize arrays of character type with a literal string, optionally enclosed in braces. Each character in the string, including the null terminator, initializes successive elements in the array. For example, you could declare

char name[] = { "Unknown" };

which sets up an eight-element array, whose elements are 'U' (for name[0]), 'n' (for name[1]), and so on (and including a null terminator).

You can initialize a wide character array (one that is compatible with wchar_t) by using a wide string literal, optionally enclosed in braces. As with character arrays, the codes of the wide string literal initialize successive elements of the array.

Here is an example of a structure initialization:

struct mystruct {

int i;
char str[21];
double d;

} s = { 20, "Borland", 3.141 };

Complex members of a structure, such as arrays or structures, can be initialized with suitable expressions inside nested braces.

Initializers

Initializers set the initial value that is stored in an object (variables, arrays, structures, and so on). If you don't initialize an object, and it has static duration, it will be initialized by default in the following manner:

To zero if it is an arithmetic type
To null if it is a pointer type

Note: If the object has automatic storage duration, its value is indeterminate.

Syntax for initializers

initializer:
= expression
= {initializer-list <,>}
(expression list)

initializer-list:
expression
initializer-list, expression
{initializer-list <,>}

Rules governing initializers

The number of initializers in the initializer list cannot be larger than the number of objects to be initialized.
The item to be initialized must be an object (for example, an array).
For C (not required for C++), all expressions must be constants if they appear in one of these places:

In an initializer for an object that has static duration.
In an initializer list for an array, structure, or union (expressions using sizeof are also allowed).

If a declaration for an identifier has block scope, and the identifier has external or internal linkage, the declaration cannot have an initializer for the identifier.
If a brace-enclosed list has fewer initializers than members of a structure, the remainder of the structure is initialized implicitly in the same way as objects with static storage duration.

Scalar types are initialized with a single expression, which can optionally be enclosed in braces. The initial value of the object is that of the expression; the same constraints for type and conversions apply as for simple assignments.

For unions, a brace-enclosed initializer initializes the member that first appears in the union's declaration list. For structures or unions with automatic storage duration, the initializer must be one of the following:

An initializer list (as described in Arrays, structures, and unions).
A single expression with compatible union or structure type. In this case, the initial value of the object is that of the expression.

The Fundamental Types

The fundamental type specifiers are built from the following keywords:

char __int8 long
double __int16 signed
float __int32 short
int __int64 unsigned

From these keywords you can build the integral and floating-point types, which are together known as the arithmetic types. The modifiers long, short, signed, and unsigned can be applied to the integral types. The include file limits.h contains definitions of the value ranges for all the fundamental types.



Integral types

char, short, int, and long, together with their unsigned variants, are all considered integral data types. Integral types shows the integral type specifiers, with synonyms listed on the same line.

Integral types

char, signed char Synonyms if default char set to signed.
unsigned char
char, unsigned char Synonyms if default char set to unsigned.
signed char
int, signed int
unsigned, unsigned int
short, short int, signed short int
unsigned short, unsigned short int
long, long int, signed long int
unsigned long, unsigned long int

Note: These synonyms are not valid in C++. See The three char types.

signed or unsigned can only be used with char, short, int, or long. The keywords signed and unsigned, when used on their own, mean signed int and unsigned int, respectively.

In the absence of unsigned, signed is assumed for integral types. An exception arises with char. C++Builder lets you set the default for char to be signed or unsigned. (The default, if you don't set it yourself, is signed.) If the default is set to unsigned, then the declaration char ch declares ch as unsigned. You would need to use signed char ch to override the default. Similarly, with a signed default for char, you would need an explicit unsigned char ch to declare an unsigned char.

Only long or short can be used with int. The keywords long and short used on their own mean long int and short int.

ANSI C does not dictate the sizes or internal representations of these types, except to indicate that short, int, and long form a nondecreasing sequence with "short <= int <= long." All three types can legally be the same. This is important if you want to write portable code aimed at other platforms.

In a C++Builder 32-bit program, the types int and long are equivalent, both being 32 bits. The signed varieties are all stored in two's complement format using the most significant bit (MSB) as a sign bit: 0 for positive, 1 for negative (which explains the ranges shown in 32-bit data types, sizes, and ranges). In the unsigned versions, all bits are used to give a range of 0 to 2^n - 1, where n is 8, 16, or 32.

Floating-point types

The representations and sets of values for the floating-point types are implementation dependent; that is, each implementation of C is free to define them. C++Builder uses the IEEE floating-point formats. See the topic on ANSI implementation-specific behavior.

float and double are 32- and 64-bit floating-point data types, respectively. long can be used with double to declare an 80-bit precision floating-point identifier: long double test_case, for example.

The table of 32-bit data types, sizes, and ranges indicates the storage allocations for the floating-point types.

Standard arithmetic conversions

When you use an arithmetic expression, such as a + b, where a and b are different arithmetic types, C++Builder performs certain internal conversions before the expression is evaluated. These standard conversions include promotions of "lower" types to "higher" types in the interests of accuracy and consistency.

Here are the steps C++Builder uses to convert the operands in an arithmetic expression:

1. Any small integral types are converted as shown in Methods used in standard arithmetic conversions. After this, any two values associated with an operator are either int (including the long and unsigned modifiers), or they are of type double, float, or long double.
2. If either operand is of type long double, the other operand is converted to long double.
3. Otherwise, if either operand is of type double, the other operand is converted to double.

4. Otherwise, if either operand is of type float, the other operand is converted to float.
5. Otherwise, if either operand is of type unsigned long, the other operand is converted to unsigned long.
6. Otherwise, if either operand is of type long, then the other operand is converted to long.
7. Otherwise, if either operand is of type unsigned, then the other operand is converted to unsigned.
8. Otherwise, both operands are of type int.

The result of the expression is the same type as that of the two operands.

Methods used in standard arithmetic conversions

Type Converts to Method

char int Zero or sign-extended (depends on default char type)
unsigned char int Zero-filled high byte (always)
signed char int Sign-extended (always)
short int Same value; sign extended
unsigned short unsigned int Same value; zero filled
enum int Same value

Special char, int, and enum conversions

Note: The conversions discussed in this section are specific to C++Builder.

Assigning a signed character object (such as a variable) to an integral object results in automatic sign extension. Objects of type signed char always use sign extension; objects of type unsigned char always set the high byte to zero when converted to int.

Converting a longer integral type to a shorter type truncates the higher order bits and leaves low-order bits unchanged. Converting a shorter integral type to a longer type either sign-extends or zero-fills the extra bits of the new value, depending on whether the shorter type is signed or unsigned, respectively.

Type categories

The four basic type categories (and their subcategories) are as follows:

Aggregate

Array
struct
union
class (C++ only)

Function
Scalar

Arithmetic
Enumeration
Pointer
Reference (C++ only)

void

Types can also be viewed in another way: they can be fundamental or derived types. The fundamental types are void, char, int, float, and double, together with short, long, signed, and unsigned variants of some of these. The derived types include pointers and references to other types, arrays of other types, function types, class types, structures, and unions.

A class object, for example, can hold a number of objects of different types together with functions for manipulating these objects, plus a mechanism to control access and inheritance from other classes.

Given any nonvoid type type (with some provisos), you can declare derived types as follows:
Declaring types

Declaration Description

type t; An object of type type
type array[10]; Ten types: array[0] - array[9]
type *ptr; ptr is a pointer to type
type &ref = t; ref is a reference to type (C++)
type func(void); func returns value of type type
void func1(type t); func1 takes a type type parameter
struct st {type t1; type t2;}; structure st holds two types

Note: type& var, type &var, and type & var are all equivalent.


Void
Syntax

void identifier

Description

void is a special type indicating the absence of any value. Use the void keyword as a function return type if the function does not return a value.

void hello(char *name)
{
    printf("Hello, %s.", name);
}

Use void in a function header if the function does not take any parameters.

int init(void)
{
    return 1;
}



Void Pointers

Generic pointers can also be declared as void, meaning that they can point to any type.

void pointers cannot be dereferenced without an explicit cast because the compiler cannot determine the size of the object pointed to.