Client Server Computing


Client/Server is a term used to describe a computing model for the development of computerized systems. This model is based on the distribution of functions between two types of independent and autonomous processes: servers and clients. Client/Server computing is an environment that satisfies the business need by appropriately allocating the application processing between the client and the server.





Explain the evolution of operating systems. Explain the network trends and business considerations. Explain office systems, with a neat diagram. Briefly explain transaction-processing applications. Explain investigation applications. Discuss dispelling the myths. Define minimal training.

Discuss how the Development Time Is Shorter?


What is Reliability? Draw Restructuring Corporate Architecture. List out Existing Standards. What is OSI? List out Components of an Open Systems Environment. What is DME? Define DCE. What is Internetworking? What is Interoperability? List out Compatible Environments. List out Perceived Benefits. What is meant by Myths? Discuss micro-oriented professionals. What is meant by relational data structures? Compare host-based applications with Client/Server applications. Define costs. What is meant by mixed platforms?

Explain the Boston-based Boston Systems Group. Explain Reliability. What is meant by Restructuring Corporate Architecture? Define open system. What is meant by Middleware? Define Platforms. What is meant by Networks? Define applications. What is the latest version of UNIX?

What are the components of Open Systems? What are the factors for success? PART B: 1. Explain in detail dispelling the Myths. Discuss Obstacles: Upfront and Hidden. Explain Open Systems and Standards. Explain Standards-Setting Organizations. Discuss the key Factors for Success. Explain briefly dispelling the Myths.

Discuss mixed platforms. Explain briefly Open Systems and Standards. Discuss the Existing Standards in Server Operating Systems. Briefly discuss the Standards-Setting Organizations. List the Factors for Success and explain them. Explain the SQL Access Group. What are the major functions of the Client?

What is Client Hardware and Software? List out Client Components. Define Client Hardware. Define Client Software. What is meant by Interface Environments? Define graphical user interface. Define drag and drop. List out the control features. What are Client Operating Systems? List out the communication mechanisms between DOS and Windows 3.x. What is a GUI? What is a DLL? What is DDE?

What is OLE? List the Screen Characteristics. What is Event Driven?

What is a Native API? Differentiate between X Window and Windowing. What is Database Access? Define SQL Interface. What is Extended SQL? Draw the event loop. What is a hybrid? What is Generated Application Logic? Define Customized Application Logic. What are Client Software Products? List out GUI Environments. What is Motif? What is Open Look? What is FlashPoint? List out the Database Access Tools.

Define data workbench. List out the data workbench tools. What is SequeLink? What is Interface Independence? Discuss Testing Interfaces. What is SQA? List out the Development Aids. Define Data Dictionaries and Repositories. What software can reside on a client machine? Define GUI. What improvements were made in DOS? What are the three technologies included in Windows 3.x?

Explain the Workplace Shell. Explain Screen characteristics. What are the common events performed by a GUI? What are the tools used in an API environment? Define memory pools. Define Dynamic Link Library. What is meant by Presentation Manager? What are the tools present in the Motif environment?

Define FlashPoint. Describe at least two advantages and disadvantages for each architecture. Explain with a sketch. Differentiate between Stateful and Stateless servers. Describe the three-level schema architecture. Why do we need mapping between schema levels? Differentiate between a Transaction server and a Data server system with an example. In client/server architecture, what do you mean by Availability, Reliability, Serviceability and Security? Explain with examples.

In the online transaction processing environment, discuss how a transaction processing monitor controls data transfer between client and server machines.

Data access requirements have given rise to an environment in which computers work together to form a system, often called distributed computing, cooperative computing, and the like. To be competitive in a global economy, organizations in developed economies must employ technology to gain the efficiency necessary to offset their higher labour costs.

Re-engineering the business process to provide information and decision-making support at points of customer contact reduces the need for layers of decision-making management, improves responsiveness, and enhances customer service. Empowerment means that knowledge and responsibility are available to the employee at the point of customer contact. Empowerment ensures that product and service problems and opportunities are identified and centralized.

For example, to remain competitive in a global business environment, businesses are increasingly dependent on the Web to conduct their marketing and service operations. Such Web-based electronic commerce, known as E-commerce, is very likely to become the business norm for businesses of all sizes.

Some of the driving forces are given below. The changing business environment: business process re-engineering has become necessary for competitiveness in the market, forcing organizations to find new ways to manage their business despite fewer personnel, more outsourcing, a market-driven orientation, and rapid product obsolescence.

Due to the globalization of business, organizations have to meet global competitive pressure by streamlining their operations and by providing an ever-expanding array of customer services. Information management has become a critical issue in this competitive environment; fast, efficient, and widespread data access has become the key to survival. Unfortunately, the demand for more accessible databases is not well served by traditional methods and platforms.

The dynamic, information-driven corporate world of today requires data to be available to decision makers on time and in an appropriate format. One might be tempted to argue that microcomputer networks constitute a sufficient answer to the challenge of dynamic data access. Globalization: conceptually, the world has begun to be treated as a single market.

Information Technology plays an important role in bringing all trade onto a single platform by eliminating barriers. IT helps and supports various marketing priorities like quality, cost, product differentiation, and services.

The growing need for enterprise data access: one of the major MIS functions is to provide quick and accurate data access for decision-making at many organizational levels. Managers and decision makers need fast, on-demand data access through easy-to-use interfaces. When corporations grow, and especially when they grow by merging with other corporations, it is common to find a mixture of disparate data sources in their systems. For example, data may be located in flat files, in hierarchical or network databases, or in relational databases.

Given such a multiple source data environment, MIS department managers often find it difficult to provide tools for integrating and aggregating data for decision-making purposes, thus limiting the use of data as a company asset. Client server computing makes it possible to mix and match data as well as hardware. The demand for end user productivity gains based on the efficient use of data resources: The growth of personal computers is a direct result of the productivity gains experienced by end-users at all business levels.

End user demand for better ad hoc data access and data manipulation, better user interface, and better computer integration helped the PC gain corporate acceptance.

These services are provided by transaction servers that are connected to the database server. A transaction server contains the database transaction code or procedures that manipulate the data in the database. A front-end application in a client computer sends a request to the transaction server to execute a specific procedure stored on the database server. No SQL code travels through the network. Transaction servers reduce network traffic and provide better performance than database servers.
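As a rough sketch of this idea, the snippet below contrasts shipping raw SQL over the network with invoking a server-side procedure through a transaction-style request. It assumes psycopg2 is installed and that the database, table, and stored procedure named here exist; they are illustrative only, not prescribed by the text.

```python
# Hedged sketch: a database-server style request (the SQL text travels over
# the network) versus a transaction-server style request (only the procedure
# name and its arguments travel; the SQL lives on the server).
import psycopg2  # assumed driver; connection details are illustrative

conn = psycopg2.connect("dbname=inventory host=dbserver user=app")
cur = conn.cursor()

# Database-server style: the full SQL statement is shipped across the network.
cur.execute("SELECT qty FROM stock WHERE part_no = %s", ("P-100",))
print(cur.fetchone())

# Transaction-server style: only "reserve_part" and its parameters are shipped;
# the procedure body already resides on the server.
cur.callproc("reserve_part", ("P-100", 5))

conn.commit()
conn.close()
```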

Other servers store semi-structured information like text, images, mail, bulletin boards, and workflow documents; their protocols differ from product to product. For example, communicating distributed objects reside on an object server, which provides access to those objects from client objects. Object application servers are responsible for sharing distributed objects across the network.

Each distributed object can have one or more remote methods. Such services provide sharing of documents across intranets, the Internet, or extranets. Like the client, the server also has hardware and software components. The hardware components include the computer, CPU, memory, hard disk, video card, network card, and so on.

Unlike the front-end client processes, the server process need not be GUI based. Keep in mind that the back-end application interacts with the operating system (networked or stand-alone) to access local resources (hard disk, memory, CPU cycles, and so on). Once a request is received, the server processes it locally. The server knows how to process the request; the client tells the server only what it needs done, not how to do it.

When the request is met, the answer is sent back to the client through the communication middleware. The server hardware characteristics depend upon the extent of the required services; for example, a database that is to be used in a network of fifty clients may require a server with a certain minimum configuration. The server process can be located anywhere in the network.

The server process may be shared. The server process can be upgraded to run on more powerful platforms. After accepting a request, the server forms a reply and sends it before checking to see whether another request has arrived. Here, the operating system plays a big role in maintaining the queue of requests that arrive for a server. Servers are usually much more difficult to build than clients because they need to accommodate multiple concurrent requests.

Typically, servers have two parts: a master and a set of slaves. The master server performs the following five steps: (1) the master opens the port at which client requests arrive; (2) the master waits for a new client to send a request; (3) if necessary, the master allocates a new local port for this request and informs the client; (4) the master starts an independent, concurrent slave to handle the request (note that the slave handles one request and then terminates; it does not wait for requests from other clients); (5) the master returns to the wait step and continues accepting new requests while the newly created slave handles the previous request concurrently.

Because the master starts a slave for each new request, processing proceeds concurrently.
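As a concrete, minimal sketch of this master/slave flow, the following Python program uses a thread in place of a separate slave process; the port number and the echo-style handling are assumptions made purely for illustration.

```python
# Minimal sketch of the master/slave pattern: the master loop accepts
# requests and hands each one to a short-lived concurrent "slave" (a thread
# here), then immediately returns to waiting.
import socket
import threading

def slave(conn, addr):
    """Handle exactly one request, reply, then terminate."""
    with conn:
        request = conn.recv(4096)          # read the client's request
        conn.sendall(b"ACK: " + request)   # form and send the reply

def master(host="0.0.0.0", port=9000):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))             # step 1: open the well-known port
        srv.listen()
        while True:                        # step 2: wait for a new client
            conn, addr = srv.accept()
            # steps 3-4: start an independent, concurrent slave for this request
            threading.Thread(target=slave, args=(conn, addr), daemon=True).start()
            # step 5: loop back and keep accepting while the slave works

if __name__ == "__main__":
    master()
```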

In addition to the complexity that results because the server handles concurrent requests, complexity also arises because the server must enforce authorization and protection rules. Server programs usually need to execute with the highest privilege because they must read system files, keep logs, and access protected data.

1st Edition

The operating system will not restrict a server program if it attempts to access a user's files. Thus, servers cannot blindly honour requests from other sites.

Instead, each server takes responsibility for enforcing the system access and protection policies. Finally, servers must protect themselves against malformed requests or against requests that will cause the server program itself to abort. Often it is difficult to foresee potential problems; once an abort occurs, no client would be able to access files until a system programmer restarts the server. The middleware also provides specialized services to the client process that insulate the front-end applications programmer from the internal workings of the database server and network protocols.

In the past, applications programmers had to write code that would directly interface with a specific database language (generally a version of SQL) and the specific network protocol used by the database server.

NetBIOS commands, for example, would allow the client process to establish a session with the database server, send specific control information, send the request, and so on. If the same application is to be used with a different database and network, the application routines must be rewritten for the new database and network protocols.

Clearly, such a condition is undesirable, and this is where middleware comes in handy. The definition of middleware is based on the intended goals and main functions of this new software category. The use of database middleware makes it possible for the programmer to use generic SQL statements to access different and multiple database servers. Without it, a problem in developing a front-end system for multiple database servers is that application programmers must have in-depth knowledge of the network communications and the database access language characteristics of each database to access remote data.
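A small sketch of this generic-SQL idea, using ODBC-style middleware through the pyodbc module (assumed to be installed); the DSN names, table, and column names are illustrative assumptions.

```python
# The client code issues one generic SQL statement; the ODBC driver manager
# routes it to whichever database the DSN points at.
import pyodbc

def customers_in(dsn, city):
    conn = pyodbc.connect(f"DSN={dsn}")   # e.g. "SalesOracle" or "SalesSQLServer"
    cur = conn.cursor()
    cur.execute("SELECT cust_id, name FROM customers WHERE city = ?", city)
    rows = cur.fetchall()
    conn.close()
    return rows

# Same application code, different servers - only the DSN changes:
# customers_in("SalesOracle", "Pune")
# customers_in("SalesSQLServer", "Pune")
```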

The problem is aggravated by the fact that each DBMS vendor implements its own version of SQL, with differences in syntax, additional functions, and enhancements with respect to the SQL standard. To accomplish its functions, the communication middleware software operates at two levels. The physical level deals with how the computers are physically linked; the physical links include the network hardware and software.

The network software includes the network protocols. Recall that network protocols are rules that govern how computers interact with other computers in a network, and they ensure that computers are able to send and receive signals to and from each other.

Physically, the communication middleware is, in most cases, the network. The process level (process to process) deals with how the client and server processes communicate. The logical characteristics are governed by process-to-process or interprocess communication protocols that give the signals meaning and purpose.

To understand the details, we will refer to the Open Systems Interconnection (OSI) network reference model, which is an effort to standardize diverse network systems. From the figure, we can trace the data flow. The session layer establishes the connection of the client processes with the server processes. If the database server requires user verification, the session layer generates the necessary messages to log on and verify the end user.

This layer also identifies which messages are control messages and which are data messages. The transport layer generates error-validation checksums and adds some transport-layer-specific ID information.

This layer adds more control information that depends on the network and on which physical media are used.

The data-link layer sends the frame to the next node. The data-link layer reconstructs the bits into frames and validates them. At this point, the data-link layer of the client and server computer may exchange additional messages to verify that the data were received correctly and that no retransmission is necessary.

The packet is sent up to the network layer. If the final destination is some other node in the network, the network layer identifies it and sends the packet down to the data-link layer for transmission to that node. If the destination is the current node, the network layer assembles the packets in the appropriate sequence, reconstructs the SQL request, and sends it up to the transport layer. If the communication between the client and server processes is broken, the session layer tries to re-establish the session.

The session layer identifies and validates the request, and sends it to the presentation layer. Although the OSI framework helps us understand network communications, it functions within a system that requires considerable infrastructure. The network protocols constitute the core of network infrastructure, because all data travelling through the network must adhere to some network protocol. In the previous section, we noted that different server processes might support different network protocols to communicate over the network.
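The following toy sketch (not real protocol code) mirrors the layered walk-through above: each function stands for one layer and wraps the payload with invented control information before handing it downward.

```python
# Toy illustration of layered encapsulation on the sending side; header
# formats are invented for illustration only.
def session_layer(sql_request, user):
    return f"SES|user={user}|{sql_request}"

def transport_layer(segment, session_id):
    checksum = sum(segment.encode()) % 256          # stand-in error check
    return f"TRA|id={session_id}|ck={checksum}|{segment}"

def network_layer(packet, dest):
    return f"NET|dst={dest}|{packet}"

def data_link_layer(frame):
    return f"DLL|{frame}|DLL_END"

frame = data_link_layer(network_layer(transport_layer(
    session_layer("SELECT * FROM orders", "alice"), session_id=7), dest="10.0.0.5"))
print(frame)
# The receiving side strips these wrappers in reverse order, validating at
# each layer, until the original SQL request reaches the server process.
```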

For example, when several processes run on the client, each process may be executing a different SQL request, or each process may access a different database server. The transport layer ID helps the transport layer identify which data corresponds to which session.

Each distribution pattern cuts the architecture into different client and server components. All the patterns discussed give an answer to the same question: How do I distribute a business information system?

However, the consequences of applying the patterns are very different with regards to the forces influencing distributed systems design. Distribution brings a new design dimension into the architecture of information systems.

It offers great opportunities for good systems design, but also complicates the development of a suitable architecture by introducing a lot of new design aspects and trap doors compared to a centralized system. There are several answers to this question. It significantly influences the software design and requires a very careful analysis of the functional and non-functional requirements. The system supports distributed business processes, which may span a single department, a whole enterprise, or even several enterprises.

Generally, the system must support more than one type of data processing, such as On-Line Transaction Processing (OLTP), off-line processing, or batch processing. Typically, the application architecture of the system is a Three-Layer Architecture, illustrated in the figure.

The user interface handles presentational tasks and controls the dialogue; the application kernel performs the domain-specific business tasks; and the database access layer connects the application kernel functions to a database.
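A compact sketch of these three layers as coarse-grained components; the banking-style classes, table, and SQLite backing store are illustrative assumptions, not part of the pattern description.

```python
import sqlite3

class DatabaseAccess:                        # database access layer
    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)
        self.conn.execute("CREATE TABLE IF NOT EXISTS accounts (id TEXT, balance REAL)")

    def balance_of(self, account_id):
        row = self.conn.execute(
            "SELECT balance FROM accounts WHERE id = ?", (account_id,)).fetchone()
        return row[0] if row else 0.0

class ApplicationKernel:                     # domain-specific business tasks
    def __init__(self, dao):
        self.dao = dao

    def can_withdraw(self, account_id, amount):
        return self.dao.balance_of(account_id) >= amount

class UserInterface:                         # presentation and dialogue control
    def __init__(self, kernel):
        self.kernel = kernel

    def show_withdrawal_check(self, account_id, amount):
        ok = self.kernel.can_withdraw(account_id, amount)
        print("approved" if ok else "insufficient funds")

ui = UserInterface(ApplicationKernel(DatabaseAccess()))
ui.show_withdrawal_check("A-1", 50.0)        # prints "insufficient funds"
```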

Our distribution view focuses on this coarse-grain component level. Within this model, two roles, client and server, classify the components of a distributed system. Client/server systems tend to be far more complex than conventional host software architectures. To name just a few sources of complexity: GUIs, middleware, and heterogeneous operating system environments. It is clear that it often requires a lot of compromises to reduce the complexity to a level where it can be handled properly.

Processing style: different processing styles require different distribution decisions. Batch applications need processing power close to the data; therefore, off-line and batch processing may conflict with transaction and on-line processing. Distribution vs. performance: we gain performance by distributed processing units executing tasks in parallel, placing data close to processing, and balancing workload between several servers.

But raising the level of distribution increases the communication overhead, the danger of bottlenecks in the communication network, and complicates performance analysis and capacity planning.

In centralized systems the effects are much more controllable, and the knowledge of and experience with the involved hardware and software allow reliable statements about the achievable performance of a configuration.

The requirement for secure communications and transactions is essential to many business domains. In a distributed environment the number of possible security holes increases because of the greater number of attack points. Therefore, a distributed environment might require new security architectures, policies and mechanisms.

Abandoning a global state can introduce consistency problems between the states of distributed components. Relying on a single, centralized database system reduces consistency problems, but legacy systems, organizational structures, or off-line processing can force us to manage distributed data sources.

Software distribution cost: The partitioning of system layers into client and server processes enables distribution of the processes within the network, but the more software we distribute the higher the distribution, configuration management, and installation cost. The lowest software distribution and installation cost will occur in a centralized system. This force can even impair functionality if the software distribution problem is so big that the capacities needed exceed the capacities of your network.

The most important argument for so called diskless, Internet based network computers is exactly software distribution and configuration management cost.

Reusability: placing functionality on a server enforces code reuse and reduces client code size, but data must be shipped to the server, and the server must be able to handle requests from multiple clients. To take a glance at the pattern language, we give an abstract for each pattern. The first pattern partitions the system within the presentation component.

One part of the presentation component is packaged as a distribution unit and is processed separately from the other part of the presentation, which can be packaged together with the other application layers.

This pattern allows an easy implementation and very thin clients. Host systems with terminals are a classical example of this approach.

Network computers, Internet, and intranet technology are modern environments where this pattern can be applied as well. Instead of distributing presentation functionality, the whole user interface becomes a unit of distribution and acts as a client of the application kernel on the server side. Another pattern splits the application kernel into two parts that are processed separately.

This pattern becomes very challenging if transactions span process boundaries (distributed transaction processing). The database is a major component of a business information system, with special requirements on the execution environment. Sometimes several applications work on the same database. The database is decomposed into separate database components, which interact by means of interprocess communication facilities. With a distributed database, an application can integrate data from different database systems, or data can be stored more closely to the location where it is processed.

Mainframe systems are highly centralized, integrated systems in which dumb terminals do not have any autonomy, and they have very limited data manipulation capabilities. From the application development point of view, various computer applications were implemented on mainframe computers (from IBM and others), with large numbers of attached dumb or semi-intelligent terminals (see the figure on the mainframe-based environment). There are some major problems with this approach: mainframe systems are very inflexible, vendor lock-in was very expensive, and the centralized DP department was unable to keep up with the demand for new applications.

The server version of a network operating system is installed on the server or servers; the client version of the network operating system is installed on the clients. A LAN may have a general server or several dedicated servers. A network may have several servers, each dedicated to a particular task: for example, database servers, print servers, file servers, and mail servers.

These servers enable many clients to share access to the same resources and enable the use of high-performance computer systems to manage the resources. A file server allows the client to access shared data stored on the disk connected to the file server. When a user needs data, the client accesses the server, which then sends a copy. A print server allows different clients to share a printer; each client can send data to be printed to the print server, which then spools and prints them.

In this environment, the file server station runs a server file access program, a mail server station runs a server mail handling program, and a print server station runs a server print handling program or a client print program.

Users, applications, and resources are distributed in response to business requirements and linked by Local Area Networks. In the Internet-based environment, the Internet also puts fat-client developers on a diet: since most Internet applications are driven from the Web server, the application processing is moving off the client and back onto the server.

Web browsers are universal clients. A web browser is a minimalist client that interprets the information it receives from a server and displays it graphically to the user.

The browser executes the HTML commands to properly display text and images on a specific GUI platform; it also navigates from one page to another using the embedded hypertext links.

HTTP servers produce platform-independent content that clients can then request. A server does not know a PC client from a Mac client; all web clients are created equal in the eyes of their web server.

Browsers are there to take care of all the platform-specific details. At first, the Web was viewed as a method of publishing information in an attractive format that could be accessed from any computer on the Internet. A server system supplies multimedia documents (pages) and runs some application programs (HTML forms and CGI programs, for example) on behalf of the client.
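The snippet below is a minimal illustration of the universal-client idea: any HTTP client can request platform-independent HTML from any web server and render it as it sees fit; the URL is a placeholder.

```python
from urllib.request import urlopen

def fetch_page(url="http://example.com/"):
    with urlopen(url) as response:
        html = response.read().decode("utf-8", errors="replace")
    # A browser would now parse the HTML, render text and images for its own
    # GUI platform, and follow embedded hypertext links on request.
    return html

if __name__ == "__main__":
    print(fetch_page()[:200])   # show the start of the returned document
```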

Explain the Peer-to-Peer architecture. Explain the three-level architecture of a database management system. Also explain the advantages and disadvantages of a DBMS.

The basic building block, the computer equivalent of information on paper, is called data. Data is information in its simplest form, meaningless until related together in some fashion so as to become meaningful. A particular file can be searched electronically, even if the user remembers only a tiny portion of what the file contains.

A database, generally defined, is a flexible, hierarchical structure for storing raw data, which facilitates its organization into useful information.

All data on computer is stored in one kind of database or another. A spreadsheet is a database, storing data in an arrangement of characters and formatting instructions.

What a database does, then, is break down information into its most fundamental components and then create meaningful relationships between those components.

We depend on databases of varying configurations and complexity for all our computerized information needs. Using a database, you can tag data, relating it to other data in several different ways, without having to replicate the data in different physical locations. In a client/server database system, the functionality is split between a server and multiple client systems. Distributed database system: geographically or administratively distributed data spreads across multiple database systems.

Parallel database system: parallel processing within a computer system allows database system activities to be speeded up, allowing faster responses to transactions; queries can be processed in a way that exploits the parallelism offered by the underlying computer system.

Centralized database system: centralized database systems are those that run on a single system and do not interact with other computer systems. They range from single-user database systems on a PC to high-performance database systems on high-end server systems. Without the database, servers would be impractical as business tools.

True, you could still use them to share resources and facilitate communication; but, in the absence of a business database, a peer-to-peer network would be a more cost-effective tool to handle these jobs. So the question of client/server becomes a question of whether or not your business needs a centralized database. Sharing and communications are built on top of that. Therefore, the client computer's resources are available to perform other system chores such as the management of the graphical user interface.

Data may be stored in one site or in multiple sites. The network links each of these processes. The client computer, also called workstation, controls the user interface. The client is where text and images are displayed to the user and where the user inputs data.

The user interface can be text based or graphical based. The server computer controls database management.


The server is where data is stored, manipulated, and retrieved. Business logic can be located on the server, on the client, or mixed between the two. This type of logic governs the processing of the application. Following are the reasons for its popularity. The underlying reason is simple: client/server systems are built from standard hardware and software components, unlike mainframe-based systems, which typically use proprietary components available only through a single vendor.

Therefore, it is possible to build an application by selecting an RDBMS from one vendor, hardware from another vendor.

Customers can select components that best fit their needs. Simplified data access: Mainframe computing was notorious for tracking huge amounts of data that could be accessed only by developers.

Instead, data access is provided by common software products (tools) that hide the complexities of data access. Interaction between client and server is in the form of transactions, in which the client makes a database request and receives a database response. In the architecture of such a system, the server is responsible for maintaining the database; for that purpose, a complex database management system software module is required.

The importance of such an architecture depends on the nature of the application and where it is going to be implemented, and the main purpose is to provide on-line access for record keeping. Suppose a database with millions of records resides on the server, which maintains it, and a user wants to issue a query that should result in only a few records; this can be achieved with a number of search criteria. An initial client query may yield a server response that satisfies the initial search criteria.

The user then adds additional qualifiers and issues a new query, and the returned records are once again filtered. Finally, the client composes the next request with additional qualifiers; the resulting search criteria yield the desired match, and the record is returned to the client. On the other hand, in the case of single-user workstations, such storage space and high processing power are not required, and they would also be costlier.
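A hedged sketch of this iterative narrowing, written against an sqlite3-style connection; the table, columns, and qualifiers are illustrative assumptions.

```python
def refine_search(conn, base_sql, qualifiers, max_rows=10):
    """qualifiers: list of (condition_sql, parameter) pairs added one at a time."""
    rows = conn.execute(base_sql).fetchall()      # initial query: possibly far too many rows
    conditions, params = [], []
    for cond, value in qualifiers:
        if len(rows) <= max_rows:                 # desired match reached
            break
        conditions.append(cond)
        params.append(value)
        sql = base_sql + " WHERE " + " AND ".join(conditions)
        rows = conn.execute(sql, params).fetchall()   # returned records filtered again
    return rows

# Example (with an sqlite3 connection named conn):
# refine_search(conn, "SELECT * FROM customers",
#               [("city = ?", "Pune"), ("status = ?", "active"), ("name LIKE ?", "Sh%")])
```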

The main server process architectures are process-per-client, multi-threaded, and hybrid architectures. Process-per-client architecture: as the name reveals, the server treats each client as a separate process and provides a separate address space for each user. As a result, it consumes more memory and CPU resources than other schemes and is slower because of process context switches and IPC overhead, but the use of a TP monitor can overcome these disadvantages.

The process-per-client architecture performs very poorly when large numbers of users connect to a database server, but it provides the best protection of databases. Multi-threaded architecture: this architecture supports a large number of clients running short transactions against the server database. It provides the best performance by running all user requests in a single address space, but it does not perform well with large queries.

Multi-threaded architecture conserves memory and CPU cycles by avoiding frequent context switches, and it offers better portability across platforms. But it suffers from some drawbacks: first, a misbehaved user request can bring down the entire process, affecting all users and their requests; second, long-duration user tasks can hog resources, causing delays for other users. The architecture is also weaker from a protection point of view. Hybrid architecture: hybrid architecture provides a protected environment for running user requests without assigning a permanent process to each user.

It also provides the best balance between server and clients.

In this scheme, each client connection is assigned to a dispatcher. The dispatcher processes are responsible for placing the messages on an internal message queue and, finally, for sending the response back to the client when it is returned from the database. Worker processes are responsible for picking work off the message queue, executing it, and finally placing the response on an output message queue.
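The toy sketch below imitates that hybrid scheme with a dispatcher function, an internal request queue, a small pool of worker threads, and an output queue for responses; the simulated "work" and pool size are arbitrary illustrative choices.

```python
import queue
import threading

request_q, response_q = queue.Queue(), queue.Queue()

def worker():
    while True:
        client_id, work = request_q.get()        # pick work off the message queue
        result = f"result of {work}"             # execute the (simulated) request
        response_q.put((client_id, result))      # place the response on the output queue
        request_q.task_done()

def dispatcher(client_id, work):
    request_q.put((client_id, work))             # no permanent process per client

for _ in range(3):                               # shared pool, not one process per user
    threading.Thread(target=worker, daemon=True).start()

for i in range(5):
    dispatcher(i, f"query #{i}")
request_q.join()
while not response_q.empty():
    print(response_q.get())
```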

This middleware software is divided into three main components, as shown in the figure. These components, or their functions, are generally distributed among several software layers that are interchangeable in a plug-and-play fashion.

The application-programming interface is public to the client application. The programmer interacts with the middleware through the APIs provided by middleware software. In other words, the middleware API allows the client process to be database independent. Such independence means that the server can be changed without requiring that the client applications be completely rewritten. The database translator translates the SQL requests into the specific database server syntax.

Because a database server might have some non-standard features, the database translator layer may opt to translate the generic SQL request into the specific format used by the database server. The network translator manages the network communication protocols; remember that a database server can use any of the network protocols.


The existence of these three middleware components yields several benefits for clients. For example, suppose the client application uses a generic SQL query to access data in two tables that reside on two different database servers. The database translator layer of the middleware software contains two modules, one for each database server type to be accessed.

Each module handles the details of each database's communication protocol. The network translator layer takes care of using the correct network protocol to access each database. When the data from the query are returned, they are presented in a format common to the client application. The end user or programmer need not be aware of the details of data retrieval from the servers; in fact, the end user might not even know where the data reside or from what type of DBMS the data were retrieved.
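As a toy illustration of what a database translator layer does, the function below renders one generic request into two vendor dialects; the LIMIT/TOP difference is a real SQL variation, but the translator itself is invented for illustration.

```python
def translate(generic_sql, limit, dialect):
    if dialect == "mysql":                      # LIMIT appended at the end
        return f"{generic_sql} LIMIT {int(limit)}"
    if dialect == "sqlserver":                  # TOP injected after SELECT
        return generic_sql.replace("SELECT", f"SELECT TOP {int(limit)}", 1)
    raise ValueError("unknown dialect")

generic = "SELECT name FROM customers ORDER BY name"
print(translate(generic, 10, "mysql"))      # SELECT name FROM customers ORDER BY name LIMIT 10
print(translate(generic, 10, "sqlserver"))  # SELECT TOP 10 name FROM customers ORDER BY name
```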

The differences between the two can be easily understood from the figures. Distributed data refers to the basic data stored on the server, which is distributed to different members of the work team.


Distributed processing, in contrast, refers to the way different tasks are organized among members of the work team. Within a replicated distributed database (the scenario depicted in the figure), the user does not need to know the data location, how to get there, or what protocols are used to get there. Data accessibility increases because end users are able to access data directly and easily, usually by pointing and clicking in their GUI-based system.

End users can manipulate data in several ways, depending on their information needs. For example, one user may want to have a report generated in a certain format, whereas another user may prefer to use graphical presentations. The data request is processed on the server side; the data formatting and presentation are done on the client side.

The database server will take care of locating the data, retrieving it from the different locations, assembling it, and sending it back to the client. In this scenario, the processing of data access and retrieval is split between the client and the server. The client TP interacts with the end user and sends a request to the server DP. The server receives, schedules, and executes the request, selecting only those records that are needed by the client.

The server then sends the data to the client only when the client requests the data. The database management system must be able to manage the distribution of data among multiple nodes.

The DBMS must provide distributed database transparency features. A number of relational DBMSs, which started as centralized systems, had components such as the user interface and application programs moved to the client side.

The standard language SQL creates a logical dividing point between client and server; hence, the query and transaction functionality remained on the server side. Exactly how to divide the DBMS functionality between client and server has not yet been established.

Different approaches have been proposed. One possibility is to include functionality of a centralized DBMS at the server level. Each client must then formulate the appropriate SQL queries and provide the user interface and programming language interface functions. The client may also refer to a data dictionary that includes information on the distribution of data among the various SQL servers, as well as modules for decomposing a global query into a number of local queries that can be executed at the various sites.

Interaction between client and server might proceed as follows during the processing of an SQL query: the query is decomposed into site queries, each site query is sent to the appropriate server site, each server processes its local query and sends the result back, and the client combines the results.

The interaction between client and server can be specified by the user at the client level or via a specialized DBMS client module that is part of the DBMS package. For example, the user may know what data is stored in each server, break down a query request into site subqueries manually, and submit individual subqueries to the various sites. The resulting tables may be combined explicitly by a further user query at the client level.
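A sketch of this manual decomposition under stated assumptions: the site list and the send_to_site() transport helper are hypothetical placeholders for whatever mechanism actually ships a local query to a server site.

```python
def send_to_site(site, subquery):
    """Hypothetical transport: ship a local SQL query to one server site."""
    raise NotImplementedError

def global_customer_count(sites):
    subquery = "SELECT COUNT(*) FROM customers"   # same local query at every site
    partial_counts = [send_to_site(site, subquery) for site in sites]
    return sum(partial_counts)                    # client-level combination step

# e.g. global_customer_count(["mumbai_srv", "delhi_srv", "chennai_srv"])
```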

The alternative is to have the client module undertake these actions automatically. In a typical DBMS, it is customary to divide the software modules into three levels. The server software is responsible for local data management at a site, much like centralized DBMS software.

The client software is responsible for most of the distribution functions; it accesses data distribution information from the DBMS catalog and processes all requests that require access to more than one site. It also handles all user interfaces. The communication software (sometimes in conjunction with a distributed operating system) provides the communication primitives that are used by the client to transmit commands and data among the various sites as needed.

This is not strictly part of the DBMS, but it provides essential communication primitives and services. The client is responsible for generating a distributed execution plan for a multisite query or transaction and for supervising distributed execution by sending commands to servers. These commands include local queries and transaction to be executed, as well as commands to transmit data to other clients or servers. Hence, client software should be included at any site where multisite queries are submitted.

Another function controlled by the client or coordinator is that of ensuring consistency of replicated copies of a data item by employing distributed or global concurrency control techniques. The client must also ensure the atomicity of global transactions by performing global recovery when certain sites fail.

One possible function of the client is to hide the details of data distribution from the user; that is, it enables the user to write global queries and transactions as though the database were centralized, without having to specify the sites at which the data referenced in the query or transaction reside.

This property is called distribution transparency. Some DDBMSs do not provide distribution transparency, instead requiring that users be aware of the details of data distribution. A DDBMS uses distributed processing to access data at multiple sites. It is obvious that the corporate world provides all kinds of information through web pages. Through links on the home page, they also provide facilities to enter the corporate intranet, whether it is the finance, human resources, sales, manufacturing, or marketing department.

Departmental information as well as services can be accessed from web pages. Even though the web is a powerful and flexible tool for supporting corporate requirements, it provides only a limited capability for maintaining a large, changing base of data.

Web-database integration is illustrated in the figure: a reference in a web page triggers a program at the web server that issues the correct database command to a database server. The output returned to the web server is converted into HTML format and returned to the web browser. In this arrangement, the only connection to the database server is the web server. The addition of a new type of database server does not require configuration of all the requisite drivers and interfaces at each type of client machine; instead, it is only necessary for the web server to be able to convert between HTML and the database interface.
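A minimal sketch of that hand-off using only the Python standard library (wsgiref and sqlite3) so it stays self-contained; the database file, table, and port are illustrative assumptions, and a real deployment would sit behind a production web server.

```python
import sqlite3
from wsgiref.simple_server import make_server

def application(environ, start_response):
    conn = sqlite3.connect("catalog.db")            # the web server is the only DB client
    rows = conn.execute("SELECT name, price FROM products").fetchall()
    conn.close()
    # Convert the database output into HTML before returning it to the browser.
    body = "<ul>" + "".join(f"<li>{n}: {p}</li>" for n, p in rows) + "</ul>"
    start_response("200 OK", [("Content-Type", "text/html; charset=utf-8")])
    return [body.encode("utf-8")]

if __name__ == "__main__":
    make_server("", 8000, application).serve_forever()
```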

Browsers are already available across almost all platforms, which relieves the developer of the need to implement a graphical user interface across multiple customer machines and operating systems.

In addition, developers can assume that customers already have, and will be able to use, browsers as soon as the Internet web server is available, avoiding deployment issues such as installation and synchronized activation. A large portion of the normal development cycle, such as development and client design, does not apply to web-based projects. The existence and effective use of such tools allows companies to re-engineer their operational processes, effectively changing the way they do business.


