Web chat room


This article illustrates how to implement a push-based web chat room on the Microsoft platform, and how to reduce the burden on the server side while doing so. Not many programming tricks are introduced here; they are not the focus of this article.

The purpose of this article is mainly to describe the overall organization of the chat room and the internal structure of its server side.

The reason the server is described in detail is that for any web application, and especially for something like a chat room, server performance is critical. Compared with the client, the server is far more sensitive to the quality of the software design and of the code. If inefficient code runs on the client, its cost is only a constant per user; on the server, the harm grows linearly as the number of users increases, and past a certain point it can grow exponentially.

Careful design and implementation of the server is therefore very important, and that is one purpose of this article. As for the client, the article mainly explains how it interfaces with the server, and only briefly touches on user-interface techniques and on submitting requests; the finer details belong to the realm of web page authoring and are not the point here.

This article uses the Windows 2000 operating system with its bundled IIS 5.0 web server; the client requires IE 5.0 or later. The server side is developed with Visual C++ 6.0, using ISAPI and COM, combined with ASP to handle user requests. The client is written as scripts in JavaScript.

There are now many chat rooms on the web that let users chat through the browser without downloading a dedicated client. That is precisely the advantage of a web chat room: its reach is enormous, since almost every user on the Internet has a browser. Asking users to download a dedicated client tends to put them off, because compared with simply using the browser it is more cumbersome. More importantly, the browser gives better control over security: programs running inside the browser are strictly restricted and cannot harm the user (or only with great difficulty).

Existing web chat rooms can be roughly divided into two categories according to how they obtain information: one kind is based on refreshing, the other on "push".

A refresh-based chat room uses some mechanism in the client page to make the page refresh itself automatically at fixed intervals; each refresh issues an HTTP request and fetches the latest messages from the server. Timed-refresh mechanisms include the HTML meta tag and the script setInterval() method, which are not discussed in detail here. The advantage of this kind of chat room is that it is simple to implement, and if network delay were not an issue its overall cost would be optimal. In reality, however, network speed is often a very real problem, and because such a chat room transmits the same data again and again, the delay becomes obvious on a slow network. Moreover, even with refreshing, a user still cannot see a message posted between two refreshes until the next refresh arrives. The frequent data transfers also add to the server's burden.

Since the key to the problem is eliminating the repeated transfer of old data, the best approach is to transfer only when new information appears. Only the server knows when new information appears, so the answer must be some mechanism by which the server transmits actively. At present, data transfer on the web is done by "pull": the client issues a request and the server responds to it; in this process the client is active and the server passive. The refresh-based chat room simply follows that mechanism. What is wanted instead is what is called "push": when a new message arrives, the server delivers it directly to the client. Obviously, because of the limitations of the existing protocols, the server cannot initiate a connection to the client and transmit data on its own, so the whole implementation has to fit entirely within the existing pull model.

To achieve push within the existing pull environment, the pull has to be bent to the purpose: push is folded into pull, and a single pull is stretched to span the entire chat session. The client initiates a connection to the server in the usual pull fashion; the connection is then kept open, and whenever a new message arrives the server transmits it to the user over this connection. Seen from the client, the process looks like one very long pull to which the server responds intermittently, each response carrying one (or more) new messages. Every push is therefore still based on a request initiated by the user; the connection is called the "push channel" here, and it is the user who opens this channel.

Actual testing shows that a web request has good continuity: even when the server sends no response at all (because there are no new messages), the push channel can be kept open for quite a long time, at least five minutes. That is enough; in practice the server and the client should never go five minutes without interacting, as will be explained later.

The advantages of a push-based chat room are obvious: it minimizes network bandwidth, reduces network delay, and lightens the load on the server. It also has a drawback: it is hard to implement, and if implemented badly the overall result may be no better than a refresh-based chat room. Furthermore, developing a push-based chat room is more complicated and therefore more expensive; if network bandwidth and server load are not actually a problem, its overall cost will be higher than that of a refresh-based chat room.

Client

The principle of push has been settled; now it has to be implemented. Compared with the server side, the client may actually be the harder part. The server is not constrained in how it does things, and the freedom of implementation there is considerable; it is only a slight exaggeration to say that, if you wished, you could even write the core code in assembly. The client is a very different matter: its implementation space is confined to a narrow range, and ideally only HTML would be used. That is feasible for a refresh-based chat room, but almost impossible for a push-based one. To make push work, the restrictions on the client have to be relaxed somewhat and active scripting has to be used; here IE's JScript, combined with its DOM (Document Object Model), is used to implement a "client" embedded in IE.

In IE, a page can remain "unfinished". After the page's URL has been submitted, as long as data keeps arriving from the server, the page's state stays "interactive", and in this state it can still be interacted with: its DOM has already been initialized and can be accessed. Such a page can serve as the push channel, and information can be pushed into it continuously.

Unfortunately, IE provides no event that signals data arriving inside the page; the three events that look relevant, onreadystatechange, ondataavailable and onpropertychange, are either inapplicable or unavailable here. A timer therefore has to be set up to check periodically whether new data has arrived. Unlike the timed refresh used by refresh-based chat rooms, the check here takes place entirely on the client and only examines the local page, whereas a refresh-based chat room polls the server; the periodic check therefore adds no load to the network or the server. The timer's resolution affects the delay visible in the user interface. The interval is set to 0.5 seconds here; the delay this introduces is acceptable, and it does not burden the client's processor.

The DOM is a good way to manage a document. To make it easy for the client to detect new messages, the server wraps every message it pushes in a pair of HTML tags (a short tag is used here because it occupies little space). According to Microsoft's documentation, the DOM lets you obtain, starting from a specified root, the collection of all elements with a given tag name; every element in the collection is a tag of that type, and the collection has a property, length, that indicates how many elements it contains. Each element can be located by its index in the collection, and the indices increase by one in the order in which the elements appear in the HTML document.

The client always records how many marker tags it found at the last check. Whenever the timer fires, if the number of marker tags detected is greater than the number previously recorded, new messages must have arrived. The indices of the new messages follow directly, so the new tags can be fetched from the collection by index. For each tag fetched (several messages may arrive within one timer interval), the message content is read from its innerHTML property and then handed to the script for parsing.

Because the push channel is in essence a TCP connection, there is no guarantee that it will not be interrupted, and unexpected interruptions must be handled; otherwise, once an interruption occurs the user will no longer receive any information. Handling an unexpected interruption is simple: once the connection is broken, the page's readyState property changes to "complete", so by checking the value of this property against the client's actual state it is possible to tell whether an unexpected interruption has occurred. When it has, the client script should resubmit the page to open the push channel again. The client's marker-tag counter must also be reset at that point.

For each message, the script performs the corresponding action according to its content and interacts with the user, for example displaying a line of chat, or removing a user from the user list because that user has left the room. The main body of the user interface is thus generated by the script according to the current state. To generate page content from script, the DOM is again used: the document object's createElement method, together with each element's appendChild, insertBefore and removeChild methods, is used to manipulate the DOM tree, and the result of every operation appears on the user interface in real time. The innerHTML property is not used for direct insertion because its performance is rather poor, the display delay is noticeable, and it is not as convenient as the DOM.

The script's other job is to interact with the server: the user's messages, entry, exit and so on are all submitted by the script on the user's behalf, because the user cannot (or cannot conveniently) interact with the server directly. There are many ways to talk to the server. The simplest is to use a form, but submitting a form refreshes its page, and a push-based chat room must not refresh. To use forms anyway, one or more iframes would have to be embedded, each loading a page containing the required form; the script would copy the user's input from the main page into those forms and then submit them by calling the submit method. This runs into a timing problem: the main page cannot tell when the submitted page has come back, so yet more timed polling would be needed, which increases the complexity of the implementation and brings more instability. Another workable method is to embed a script in every returned page that handles the onload event and notifies the main page, through a cross-frame call, that loading has finished. None of these methods is ideal. The best approach is the XMLHTTP component newly supported in IE 5. Its ProgID is "Microsoft.XMLHTTP"; as the name suggests, it mainly operates on XML data. What matters here is that it supports asynchronous operation and callback functions, so interaction with the server can be done in pure script. The only annoyance is that inside the callback there is no reference to the XMLHTTP object that raised it, so some other means (a global variable, for example) has to be used to reach it; it would be better if Microsoft gave the callback a parameter referencing the hidden XMLHTTP object, or made the this variable refer to it.

Further details about using script to drive the UI and interact with the user, and about using script to interact with the server, are not the focus of this article and are not described here.

Server side

On the server, the most important thing is to implement a push mechanism that meets the requirements described above. The essential characteristic of this mechanism is that it is asynchronous: most of the time it is doing nothing, and it becomes active only when new information arrives, sending that new information to the users.

Windows is an operating system with good support for asynchronous operation, and the requirement above maps naturally onto multithreading: each thread is responsible for one user connection and spends most of its time suspended; when a new message arrives it is woken by some mechanism and pushes the data to its user. It might seem, then, that writing a multithreaded server is all that is needed to handle user requests. In reality, though, clients do not only receive pushed information; they must also interact with the server, for example to send the user's messages to it. The simplest way to handle these interactions is ASP: ASP's scripting makes it easy to analyze and process user requests and to organize and generate responses, and it is flexible and very easy to maintain. ASP alone, however, cannot implement the server, because it was never designed to handle large numbers of asynchronous operations. To have the best of both worlds, the two must be combined. Of all the mechanisms that let ASP call binary code, the best is Microsoft's COM model; ASP fully supports COM, so packaging the code as a COM component lets it cooperate well with ASP.

Having chosen COM, the executable form and the threading model must be chosen. For performance, the best choice is to build the component as a DLL and let it be loaded into the IIS process (actually IIS's child process, DLLHOST.EXE) and called directly, so that the marshaling needed to cross a process boundary is avoided and performance improves. Since a DLL is used, the push implementation must also live in that DLL; otherwise, if part of the implementation ran in another process, the DLL would be nothing more than a proxy, cross-process marshaling would still be unavoidable, and the performance gained by putting the component in a DLL would be cancelled out. And since the push code is built into the component, the push implementation must be multithreaded, which means the whole program is unavoidably multithreaded; for best performance the component should clearly declare a multithreaded model, Free or Both.

According to this design, the code that implements the push will execute inside the IIS process. There are several ways to run code in the IIS process, but by far the best is to use IIS's ISAPI interface. Through this interface the application interacts directly with IIS rather than with the client; IIS takes care of thread management and of the client connections, and it also handles sending and receiving data and some of the parsing. This significantly reduces the amount of code, lowers the development complexity, and integrates better with the web server, improving overall performance.

The ISAPI part here means an ISAPI extension. As the implementation of the push, it can be requested like an ordinary .dll: a request addressed to it is served by it. It can also be configured as a script engine by mapping a file extension to it and giving it control over a sub-area of the site; users then get its service by requesting any file under that area whose name carries the mapped extension.
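
As a rough illustration of what the ISAPI part looks like, here is a minimal extension skeleton with the entry points IIS expects. This is a sketch rather than the article's actual code; the real extension keeps the connection open and returns a pending status, as shown later.

```cpp
// Minimal ISAPI extension skeleton (illustrative, not the article's code).
// Built as a DLL exporting GetExtensionVersion and HttpExtensionProc.
#include <windows.h>
#include <httpext.h>    // EXTENSION_CONTROL_BLOCK, HSE_* definitions

BOOL WINAPI GetExtensionVersion(HSE_VERSION_INFO* pVer)
{
    pVer->dwExtensionVersion = MAKELONG(HSE_VERSION_MINOR, HSE_VERSION_MAJOR);
    lstrcpynA(pVer->lpszExtensionDesc, "Chat push channel", HSE_MAX_EXT_DLL_NAME_LEN);
    return TRUE;
}

DWORD WINAPI HttpExtensionProc(EXTENSION_CONTROL_BLOCK* pECB)
{
    // A trivial synchronous reply, just to make the skeleton complete. The push
    // channel variant (sketched later) keeps the connection open instead and
    // returns HSE_STATUS_PENDING.
    char headers[] = "Content-Type: text/plain\r\n\r\n";
    pECB->ServerSupportFunction(pECB->ConnID, HSE_REQ_SEND_RESPONSE_HEADER,
                                (LPVOID)"200 OK", NULL, (LPDWORD)headers);
    char body[] = "chat extension loaded";
    DWORD cb = sizeof(body) - 1;
    pECB->WriteClient(pECB->ConnID, body, &cb, HSE_IO_SYNC);
    return HSE_STATUS_SUCCESS;
}

BOOL WINAPI TerminateExtension(DWORD dwFlags)
{
    return TRUE;    // allow IIS to unload the DLL
}
```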

In this way the DLL actually contains two components: one is the COM component that interacts with ASP, the other is the ISAPI extension that interacts with IIS. From IIS's point of view the two parts are loaded and initialized separately, each the first time it is called, even though only one DLL is ever loaded. Because it can happen that one part is already working while the other has not yet been initialized, even though they live in the same DLL, the synchronization of these two parts must be considered at design time.

Next comes the design of the server-side structure. The focus of the server-side design is to maximize the server's performance, and the key to that is the sensible use of multithreading. In addition, operating-system facilities are used wherever possible to simplify the implementation. For example, the operating system chosen here is Windows 2000, and Windows 2000 has a built-in hash-table implementation, so there is no need to write one when working with large numbers of strings.

The chat component should support multiple chat rooms rather than a single one, so as to spread out the chat traffic. A "lobby" is therefore needed: when a user first logs in he is in the lobby, where he can see the status of each chat room and decide which one to enter. To manage the open chat rooms, a class is needed to describe them, called the chat-room descriptor here. Likewise, to manage the online users, a class called the user descriptor describes each of them. Finally, to manage the many instances of chat-room and user descriptors, a master directory is needed to index them.

Because chat-room descriptors and especially user descriptors exist in many instances and are allocated and freed frequently while the server runs, memory fragmentation is easy to produce, and a special management scheme is needed for them. Giving these classes their own private heaps solves the fragmentation problem.
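
A minimal sketch of the private-heap idea follows; the class name and fields are illustrative, not the article's actual code. Giving the descriptor class its own operator new and operator delete keeps all of its allocations inside one dedicated heap, so their churn cannot fragment the process heap.

```cpp
#include <windows.h>
#include <new>

class UserDescriptor {
public:
    static void InitHeap() { s_heap = HeapCreate(0, 0, 0); }   // once, at component init
    static void FreeHeap() { if (s_heap) HeapDestroy(s_heap); }

    // All instances are carved out of the private heap.
    static void* operator new(size_t size) {
        void* p = HeapAlloc(s_heap, 0, size);
        if (!p) throw std::bad_alloc();
        return p;
    }
    static void operator delete(void* p) {
        if (p) HeapFree(s_heap, 0, p);
    }

private:
    static HANDLE s_heap;
    // ... user name, push-channel ECB, message-ID cache and so on
};

HANDLE UserDescriptor::s_heap = NULL;
```

HeapCreate(0, 0, 0) creates a growable heap with serialized access, which matters because several threads allocate and free descriptors concurrently.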

When a user logs in, the client immediately opens a connection to the server, and the request goes to the component's ISAPI part. After verifying the user's identity, the ISAPI part finds the user's descriptor and stores the request's ECB (Extension Control Block) in the descriptor as the push channel. Finally the function returns a pending status to IIS, indicating that processing is not yet complete.
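
The following fragment sketches what that looks like inside HttpExtensionProc. FindUserFromRequest, AttachPushChannel and OnSendComplete are assumed helpers standing in for code the article only describes; HSE_REQ_IO_COMPLETION registers the callback that IIS will invoke when an asynchronous send on this connection finishes.

```cpp
class UserDescriptor;                                                          // see above
UserDescriptor* FindUserFromRequest(EXTENSION_CONTROL_BLOCK* pECB);           // assumed helper
void AttachPushChannel(UserDescriptor* pUser, EXTENSION_CONTROL_BLOCK* pECB); // assumed helper
VOID WINAPI OnSendComplete(EXTENSION_CONTROL_BLOCK*, PVOID, DWORD, DWORD);    // sketched later

DWORD WINAPI HttpExtensionProc(EXTENSION_CONTROL_BLOCK* pECB)
{
    UserDescriptor* pUser = FindUserFromRequest(pECB);  // verify identity, find descriptor
    if (!pUser)
        return HSE_STATUS_ERROR;

    // Ask IIS to call OnSendComplete whenever an asynchronous WriteClient on this
    // connection completes; the user descriptor is passed back as the context.
    pECB->ServerSupportFunction(pECB->ConnID, HSE_REQ_IO_COMPLETION,
                                (LPVOID)OnSendComplete, NULL, (LPDWORD)pUser);

    // Keep the ECB in the descriptor: it is the push channel, and its address
    // remains valid until the session is ended explicitly.
    AttachPushChannel(pUser, pECB);

    return HSE_STATUS_PENDING;    // tell IIS the session is not finished yet
}
```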

When a user speaks, the message content is posted to the server. IIS hands the request to ASP; the ASP page verifies the user's identity and, after some parsing and processing, calls the appropriate method on the COM component's interface to notify the component that a user has spoken. The component looks up the room the user is in and, for every user in that room, sends the message content as a response over the connection that user established, by calling back through IIS's ISAPI interface. That is the whole "push" process.
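
A sketch of the component method that the ASP page calls might look as follows. Every name here (CChatServer, ChatRoom, the directory, the central cache and the per-user calls) is an assumption used for illustration; the central cache and message-ID scheme it anticipates are described in detail further on.

```cpp
// Illustrative fragment of the COM component's "a user has spoken" method.
STDMETHODIMP CChatServer::Say(BSTR roomName, BSTR userName, BSTR text)
{
    ChatRoom* pRoom = m_directory.FindRoom(roomName);
    if (!pRoom)
        return E_INVALIDARG;

    // Store one copy of the message body centrally and get its ID (see later).
    ULONGLONG msgId = m_centralCache.Store(userName, text);

    // Hand every user in the room a small notification that carries only the ID,
    // then make sure each user's asynchronous send is running.
    pRoom->LockForRead();
    for (UserDescriptor* pUser = pRoom->FirstUser(); pUser != NULL;
         pUser = pRoom->NextUser(pUser)) {
        pUser->QueueMessageId(msgId);
        pUser->StartAsyncSendIfIdle();
    }
    pRoom->Unlock();
    return S_OK;
}
```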

Consider how messages get sent. If every user could have a thread of his own, a send would be just a simple WriteClient() call. But although a thread consumes far fewer resources than a process, with many users the server would have to maintain a very large number of threads, and the resource consumption would still be considerable. More importantly, most of those threads would be asleep doing nothing, simply wasting resources. A better way of using threads is needed than simply letting each one block and wait.

The best solution here is a thread pool: a thread that has nothing to do for the moment is not left blocked but is reclaimed. A reclaimed thread goes back into the pool; when a new task arrives, a thread is taken from the pool to carry it out, and when the task is finished the thread returns to the pool, and so on. The pool should also keep the number of threads within bounds: enough to ensure that no request waits too long, but not so many that resources are wasted. Another way to run a thread pool is to create exactly as many threads as there are CPUs, the aim being maximum throughput with no thread switching. That approach is not suitable here, however, because the goal is to serve all users' requests roughly in parallel (even at the cost of some wasted resources) rather than to squeeze the most out of the CPUs while treating users unfairly. In other words, when many users request service at the same time, the service each of them gets may become slower, but the server should still deliver its average service capacity to each of them, rather than finishing one request before starting the next; otherwise some users would wait so long that their clients would time out.

Writing a thread pool with such self-adjusting rules is fairly complicated, but Windows 2000 has built-in support for this kind of thread pool, and the benefit of taking advantage of it is self-evident. In fact, IIS itself uses a thread pool when it calls ISAPI programs.
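
The article relies on the pool of threads IIS itself uses to run ISAPI code, but the Windows 2000 thread-pool API it alludes to is worth a small sketch. QueueUserWorkItem hands a task to the system pool, which picks an idle thread (or creates one) and recycles it afterwards; the task type and function names below are illustrative.

```cpp
#define _WIN32_WINNT 0x0500     // QueueUserWorkItem is declared for Windows 2000 and later
#include <windows.h>

struct PushTask {
    void* pUser;                // in the real code this would be a UserDescriptor*
};

static DWORD WINAPI DeliverPendingMessages(LPVOID pv)
{
    PushTask* pTask = static_cast<PushTask*>(pv);
    // ... look up the queued message IDs for pTask->pUser and start an async send ...
    delete pTask;
    return 0;
}

void SchedulePush(void* pUser)
{
    PushTask* pTask = new PushTask;
    pTask->pUser = pUser;
    // The pool thread runs DeliverPendingMessages and then goes back to the pool.
    QueueUserWorkItem(DeliverPendingMessages, pTask, WT_EXECUTEDEFAULT);
}
```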

Using a thread pool means that the same session will be handled by several different threads over its lifetime, so the session-handling code must not depend on any particular thread; this also requires support from IIS, because IIS keeps per-session data of its own. Fortunately ISAPI supports this well. ISAPI has asynchronous capabilities: the initial handler can declare, by returning a pending status, that the session is not yet finished, which puts the session into an asynchronous state, and when processing is finally complete the session is ended by calling a designated interface. As mentioned earlier, every ISAPI session is described by an ECB; IIS is responsible for maintaining this structure, and its address does not change for the duration of the session, so through one pointer it can be accessed at any time from any thread, which keeps the processing independent of threads.

Now consider a user who should receive messages but whose push channel is temporarily unavailable. There must be some mechanism that caches the messages for him and sends the cached content once his push channel becomes usable again; otherwise the user would lose everything said to him while the channel was down, and since network interruptions are quite common, without such a mechanism the problem would become very serious. In addition, because of a limitation of IIS itself, an ISAPI extension can have only one asynchronous operation outstanding per session at a time; new messages may well arrive while an asynchronous send is in progress, when another one cannot be started, and to avoid losing those messages a caching mechanism is needed as well.

On the surface the cache is per user, that is, it caches content for each individual user. Looked at from the content's side, however, a message that cannot be delivered for the moment would then be cached on the server in multiple copies, one per recipient, which is clearly not worthwhile. The caching mechanism must therefore both distinguish between users and avoid storing the same content repeatedly. The result is a central caching mechanism: the actual body of every message is stored once in the central cache, and each cached message is assigned an ID, the message ID. Then, for every destination user of the message, a small notification containing that ID is queued, telling him that a message has arrived. Each user needs only a relatively small local buffer to cache these notifications, which is enough to survive an unexpected interruption of the push channel while saving server resources. When the channel is usable again, the message IDs in the local cache are used to fetch the corresponding messages from the central cache and send them to the user.

The next question is how the central cache should be managed. Since there is only one central cache, access to it is bound to be frequent, so accesses should block as little as possible, which means using as little synchronization as possible and preferring lightweight synchronization primitives. In the implementation, the whole cache is divided into a number of buffer entries, each holding one message; the entries have a fixed size (the central cache mainly holds user messages, whose maximum length is limited, and that limit determines the entry size). The entries of the central cache are used cyclically: after the last entry has been used, the next entry to use is again the first. In the chat room's multithreaded environment, interlocked functions can be used to prevent two threads from picking the same entry. Interlocked functions are a lightweight synchronization mechanism; compared with the alternatives, the performance cost they impose is tiny.
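
A sketch of the cyclic, interlocked slot allocation might look like this. The sizes, field names and the SlotAt accessor are illustrative assumptions; the sequence field is used by the message-ID scheme described next.

```cpp
#include <windows.h>

const DWORD kSlotCount = 1024;     // number of buffer entries
const DWORD kSlotSize  = 512;      // fixed entry size, set by the maximum message length

struct CacheSlot {
    LONG sequence;                 // how many messages have ever used this entry
    char text[kSlotSize];
};

class CentralCache {
public:
    CentralCache() : m_next(-1) { ZeroMemory(m_slots, sizeof(m_slots)); }

    // Pick the next entry cyclically. InterlockedIncrement guarantees that no two
    // threads get the same index, without taking any lock.
    CacheSlot* AllocateSlot(DWORD* pIndex) {
        LONG n = InterlockedIncrement(&m_next);
        *pIndex = (DWORD)n % kSlotCount;
        return &m_slots[*pIndex];
    }

    CacheSlot* SlotAt(DWORD index) { return &m_slots[index % kSlotCount]; }

private:
    volatile LONG m_next;          // running counter of allocations
    CacheSlot m_slots[kSlotCount];
};
```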

With this scheme, an important issue is avoiding message overwrites, that is, a new message being written into an entry whose previous message is still in use. In practice, as long as the number of entries is set large enough, overwrites can be avoided or kept to a minimum. Although overwriting carries some risk, on balance the scheme is worth it.

At the same time, the central cache must guarantee that the ID assigned to each message entering the cache is not reused. Otherwise, if a user's push channel is unavailable for a long time and his descriptor holds a message ID for a long while, the cache might meanwhile assign the same ID to a new message, and when the user's channel came back he would receive the wrong message. Clearly the message ID cannot simply be the buffer entry's index.

In the final implementation of the central cache, the message ID is a 64-bit integer: the high 32 bits hold the entry index, and the low 32 bits hold a sequence number, the count of messages that have ever been stored in that entry. Messages in different entries therefore always have different IDs, and a message that reuses an entry gets an ID that can only recur after the entry has been reused 2^32 times, by which time any reference to the old message is long gone; in any foreseeable situation a message ID will not repeat. The entry's sequence number could also be maintained with an interlocked function, but entry allocation already spreads threads across different entries, and filling an entry takes very little time, so it is unlikely that two threads will ever be working on the same entry at the same moment. An interlocked function is therefore not really needed; a plain increment of the sequence number is enough.
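
Composing the ID is then a matter of packing the entry index into the high 32 bits and the entry's sequence number into the low 32 bits. This is an illustrative continuation of the CentralCache sketch above; the plain increment mirrors the article's argument that an interlocked operation is unnecessary here.

```cpp
// Illustrative continuation of the CentralCache sketch above.
ULONGLONG CentralCache_Store(CentralCache& cache, const char* text)
{
    DWORD index;
    CacheSlot* pSlot = cache.AllocateSlot(&index);

    pSlot->sequence++;                          // bump BEFORE writing the content
    lstrcpynA(pSlot->text, text, kSlotSize);    // fill the entry

    return ((ULONGLONG)index << 32) | (DWORD)pSlot->sequence;   // 64-bit message ID
}
```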

When extracting a message from the central cache, the implementation first checks whether the sequence number recorded in the message ID matches the entry's current sequence number; if it does, the user is sent the message, otherwise the user is told the message has been lost. The entry may also be reallocated while the message is being extracted, in which case another thread will be writing into it, and this case must be caught as well or garbage would be sent. As long as the sequence number is incremented before the content is stored, and is checked again after the content has been extracted, the extracted message is guaranteed to be intact.
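
Extraction with the two sequence checks could then be sketched as below, still using the illustrative names from the CentralCache sketch. Because a later writer bumps the entry's sequence before it starts writing, a mismatch either before or after the copy reveals that the message has been overwritten.

```cpp
bool CentralCache_Fetch(CentralCache& cache, ULONGLONG msgId, char* out, DWORD outSize)
{
    DWORD index = (DWORD)(msgId >> 32);          // entry index (high 32 bits)
    LONG  seq   = (LONG)(msgId & 0xFFFFFFFF);    // sequence number (low 32 bits)

    CacheSlot* pSlot = cache.SlotAt(index);
    if (pSlot->sequence != seq)
        return false;                            // already overwritten: report loss

    lstrcpynA(out, pSlot->text, outSize);        // copy the content out

    // Re-check: if a writer reused this entry while we were copying, it has
    // already incremented the sequence, so the mismatch shows up here.
    return pSlot->sequence == seq;
}
```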

The size of the central cache is a key issue: too large a cache wastes resources, too small a cache causes serious message loss. Moreover, the cache size needed varies between periods over the whole life of the chat room, so the cache has to be resized dynamically to match the intensity of use. Unfortunately Windows 2000 provides nothing ready-made for this, so an automatic adjustment mechanism was designed: the cache's usage is judged from the percentage of messages lost per unit of time. If the cache is too small, many overwrites occur per unit of time and more messages are lost; when the loss rate rises above a certain threshold, the cache should be enlarged. Conversely, if the cache is too large, overwrites hardly ever occur; when the loss rate falls below another threshold, the cache should be shrunk. As long as the two thresholds are chosen sensibly, the cache's utilization can be kept close to optimal.
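
The resize policy can be sketched as a simple periodic check of the loss rate. The threshold values, the AdjustTo and SlotCount calls, and the idea of doubling or halving are all illustrative assumptions; for this to apply, the fixed-size array in the earlier sketch would have to become dynamically allocated.

```cpp
// Called once per measurement interval (illustrative policy, not the article's code).
void AdjustCacheSize(CentralCache& cache, DWORD lostInInterval, DWORD totalInInterval)
{
    if (totalInInterval == 0)
        return;

    const double kGrowThreshold   = 0.01;    // more than 1% lost: the cache is too small
    const double kShrinkThreshold = 0.0;     // nothing lost at all: the cache may be too big

    double lossRate = (double)lostInInterval / (double)totalInInterval;

    if (lossRate > kGrowThreshold)
        cache.AdjustTo(cache.SlotCount() * 2);       // grow
    else if (lossRate <= kShrinkThreshold)
        cache.AdjustTo(cache.SlotCount() / 2);       // shrink, down to some minimum
}
```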

The master directory, the chat-room descriptors and the user descriptors share a common synchronization requirement: "single writer, multiple readers". This is a classic synchronization pattern; Jeffrey Richter describes an implementation in detail in Programming Applications for Microsoft Windows, and his routine is used here. The original source code, however, does not allow a thread that already holds the write lock to acquire the read lock as well, which would cause a deadlock; the code was modified to support this, because the situation does arise in the actual implementation.

With this locking mechanism in place, the message ID cache inside the user descriptor is easy to implement. The message ID cache works much like the central cache: it is used cyclically, old entries are overwritten automatically, and the integrity of its contents can be guaranteed. Unlike the central cache, however, it lives inside the user descriptor and is protected by the descriptor's own synchronization, so its implementation can be simplified by exploiting that existing mechanism. The simplification mainly concerns how integrity is guaranteed, and the main trick is to use the single-writer/multiple-reader lock the other way round: the threads that acquire the read lock are actually the ones writing IDs, and the thread that acquires the write lock is the one reading them. Since several threads may queue IDs at the same time, they use the same interlocked slot-allocation scheme as the central cache, which keeps the ID cache cyclic and lets old entries be overwritten automatically. When a thread wants to read IDs, it simply takes the write lock, which guarantees that no other thread is writing into the buffer at that moment, so no per-entry sequence number like the central cache's is needed. And, as analyzed above, IIS allows only one outstanding asynchronous I/O per session, and IDs need to be read only when I/O is happening on that session, so in practice only one thread at a time ever reads from the ID buffer; using the write lock for reading therefore costs no concurrency.
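
A sketch of the per-user ID cache under this inverted locking scheme follows. The ring size and names are illustrative. Put is called by threads holding the descriptor's read (shared) lock, so the interlocked counter keeps them from clashing with one another, while Get is called under the write (exclusive) lock, so no Put can be in progress at the same time and no further checking is needed.

```cpp
#include <windows.h>

const DWORD kIdRingSize = 64;      // illustrative size of the per-user notification ring

class MessageIdCache {
public:
    MessageIdCache() : m_written(-1), m_read(0) {}

    // Called while holding the descriptor's READ lock; several threads may be here.
    void Put(ULONGLONG id) {
        LONG n = InterlockedIncrement(&m_written);    // claim a unique slot
        m_ids[(DWORD)n % kIdRingSize] = id;           // old entries overwrite automatically
    }

    // Called while holding the descriptor's WRITE lock; exactly one thread at a time,
    // and no Put can be running concurrently, so plain reads are safe.
    bool Get(ULONGLONG* pId) {
        LONG written = m_written;
        if (m_read > written)
            return false;                             // nothing pending
        if (written - m_read >= (LONG)kIdRingSize)    // fell behind: skip overwritten IDs
            m_read = written - (LONG)kIdRingSize + 1;
        *pId = m_ids[(DWORD)m_read % kIdRingSize];
        ++m_read;
        return true;
    }

private:
    ULONGLONG m_ids[kIdRingSize];
    volatile LONG m_written;                          // index of the last ID queued
    LONG m_read;                                      // index of the next ID to consume
};
```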

The next question is how the asynchronous I/O that sends data to the user is driven: what starts it, and what happens when a send completes? When the push channel has just been established, the code checks whether there is anything to send; if there is, an asynchronous send is started, and finally a pending status is returned. When an asynchronous send completes, the callback function is invoked; it checks whether the user descriptor holds more messages to send, and if so it immediately starts the next asynchronous send, and so on until nothing is left to send. At that point the callback must not sit and wait; it must return at once so that its thread goes back to the thread pool. When a new message arrives later, the asynchronous sending has already stopped and the message cannot go out by itself, so the thread that writes the message ID must start the sending again. Looked at from the other side, if an asynchronous send is already in progress, the thread writing the ID must not start another one, because two asynchronous operations would then exist on the same session at once, which, given the IIS limitation mentioned earlier, causes an error. The thread writing the ID must therefore be able to check whether sending has stopped, which can be solved by manipulating a flag variable with interlocked functions. One more foreseeable concurrency problem remains: two threads writing IDs into the buffer might both find that asynchronous sending has stopped and both try to start it, so that two asynchronous operations would again exist on the same session at the same time. A lock-like variable is needed here too, so that the writing threads exclude one another and only one of them starts the asynchronous send.
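
A combined sketch of the send pump and the interlocked flag follows. The UserDescriptor fields and the two helper methods are illustrative stand-ins for code the article only describes; the interlocked compare-exchange is what lets several ID-writing threads agree that exactly one of them starts the send.

```cpp
#include <windows.h>
#include <httpext.h>

struct UserDescriptor {                        // minimal stand-in: only what this sketch needs
    EXTENSION_CONTROL_BLOCK* m_pECB;           // the push channel
    volatile LONG m_sending;                   // 1 while an asynchronous send is in flight
    BOOL NextPendingMessage(char* buf, DWORD* pcb);   // drains the ID cache (assumed)
    void DetachPushChannel();                          // releases the channel (assumed)
};

// Called by any thread that has just queued a message ID for this user.
void StartAsyncSendIfIdle(UserDescriptor* pUser)
{
    // Atomically flip the flag 0 -> 1. Only one competing thread wins; if a send is
    // already in flight nobody starts another, respecting IIS's limit of one
    // asynchronous operation per session.
    if (InterlockedCompareExchange(&pUser->m_sending, 1, 0) != 0)
        return;

    char buf[1024];
    DWORD cb = sizeof(buf);
    if (pUser->NextPendingMessage(buf, &cb))
        pUser->m_pECB->WriteClient(pUser->m_pECB->ConnID, buf, &cb, HSE_IO_ASYNC);
    else
        InterlockedExchange(&pUser->m_sending, 0);    // nothing to send after all
}

// I/O completion callback registered earlier with HSE_REQ_IO_COMPLETION.
VOID WINAPI OnSendComplete(EXTENSION_CONTROL_BLOCK* pECB, PVOID pContext,
                           DWORD cbIO, DWORD dwError)
{
    UserDescriptor* pUser = (UserDescriptor*)pContext;

    if (dwError != NO_ERROR) {
        // The push channel broke: release it and end the ISAPI session.
        pUser->DetachPushChannel();
        pECB->ServerSupportFunction(pECB->ConnID, HSE_REQ_DONE_WITH_SESSION,
                                    NULL, NULL, NULL);
        InterlockedExchange(&pUser->m_sending, 0);
        return;
    }

    char buf[1024];
    DWORD cb = sizeof(buf);
    if (pUser->NextPendingMessage(buf, &cb)) {
        // More queued messages: start the next asynchronous send and return at once,
        // giving the thread back to the pool.
        pECB->WriteClient(pECB->ConnID, buf, &cb, HSE_IO_ASYNC);
    } else {
        // Nothing left: mark the pump idle so the next thread that queues an ID restarts it.
        InterlockedExchange(&pUser->m_sending, 0);
    }
}
```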

Finally, a very important issue in the concrete implementation is locking. An appropriate locking granularity must be arranged for every step of every operation, and the locking order across those steps must be arranged sensibly. At any given moment there are always several operations of different kinds in progress; if the locking order or granularity is wrong, the result is at best reduced performance and at worst deadlock. The cause of a deadlock is usually a logic error made at design time, and once a deadlock occurs its cause is very hard to track down. Careful design is therefore essential before the code is written.

At this point the design of the server side is essentially complete. The detailed questions of how the code is written are, as the foreword said, not the focus of this article; related information can be found in the UML diagrams in the appendix.

As an aside, the NS (Netscape) browser supports a special type of data stream whose type description field is "Content-Type: multipart/x-mixed-replace; boundary=boundary". This non-standard data type carries data in several parts, separated by the boundary, each part being a snapshot of the same content at a different time. Using this data type, push could perhaps be implemented in pure HTML, but only NS supports it; IE does not.

Shortly after I finished the first version of the chat room based on these ideas, I found that Capital Online (www.263.net) was using a similar push technique (it was still in use when this was written); it differs only in the specific implementation. Its server is not NT but some variety of UNIX. It does not embed itself in the web server the way this article does; it appears to be a separately written service listening on a port other than 80, to which the user is directed to connect. That port also speaks HTTP, so if the whole service was developed in-house, the server-side implementation of HTTP must have been written by them as well, and judging from that their development effort cannot have been small. But I do not think it is more efficient than the approach described in this article.

The client implementation also differs. Their unfinished page sits in the UI part, that is, the page receiving the push is directly visible, so the information pushed into the page contains a lot of HTML markup; in fact it even contains script. I do not know what benefit that brings, but what is certain is that it increases the amount of data transmitted, and since script is included, a transmission error makes the script fail as well, which is usually more serious than a broken HTML tag.

Although the chat room has been implemented, it is a pity that, for various reasons, I have not yet been able to deploy it; I hope to do so as soon as possible.

