
Original in: http://www.bookofhook.com/Article/gamedevelopment/multiPlayerProgramming.html

Introduction to MultiPlayer Game Programming - Sunday, December 21, 2003

With the surge in popularity of multiplayer games such as Quake, Half-Life: Counter-Strike, Everquest and Battlefield 1942, it is becoming increasingly important for game developers to support multiple players. Unfortunately, information on how to write multiplayer on-line games is scarce.

One of the aspects of multiplayer game programming that makes it so confusing is the sheer number of topics and how they relate to each other. Code must be written that handles everything from high level game specific tasks all the way down to extremely low level tasks such as network packet broadcast and reception.

High Level Architecture

Before delving into the nitty gritty low level aspects of implementing a multiplayer game, we need to take a step back and think about what we're trying to do from a higher level. Specifically, "sending the bits" is a different problem than "which bits to send", so we'll tackle the latter part first.

At some point with any multiplayer game one of the computers has to make the final decision as to the outcome of some action. For example, if two players are competing head to head, then you cannot have a situation where Player A thinks he killed Player B while Player B thinks he killed Player A.

The two most common architectures for resolution arbitration are client-server and peer-to-peer. Client-server is both conceptually simpler and easier to implement than peer-to-peer, so we'll cover that first.

Client-Server

In a client-server architecture all the players, or "clients", are connected to a central machine, the server. The server is responsible for all important decisions, managing state and broadcasting this information to the individual clients. A single view of the world is maintained by the server, which obviously helps with keeping things consistent.

As a result, the server becomes a key bottleneck for both bandwidth and computations. Instead of distributing the load among all the players, the server must do all the work. And, of course, it has to send and receive N independent streams of data, so its network connection is similarly taxed.

Sometimes the server will be running on a player's machine as a "local server" or a "listen server". The rules still apply in this case, because the client and server are logically decoupled even if running on the same physical system.

Peer-to-peer

A peer-to-peer system spreads the computational load out among all the players. If you have 8 players, each with a computer, then it's nice to leverage all the available computing power. The downside, of course, is that "computation" means "decision making", so cheating can become rampant (each client can be hacked to report results favorable to that specific player). In addition, the system is more susceptible to consistency errors, since each peer has to make sure that it broadcasts its "decisions" and it must base these on the data provided by the other peers. If a peer falls off the network or does not get correct information in a timely manner, synchronization failures can and will occur, since it's analogous to a CPU failing in a multiprocessor computer.

The advantage of a peer-to-peer architecture is that overall bandwidth and computational requirements for each system are reduced, and you do not need a single beefy server responsible for managing the entire game.

Hybrid

In reality, most architectures are hybrid systems, as going to an extreme in either direction can lead to significant problems.

For example, in a true client-server system, the client would never move the player until the server responded with a "based on your last input, here is your new position". This is fine, assuming you have client-side prediction (discussed later), but this means that the server is handling all collision detection. This is excessively computationally expensive, to the point that it's not tenable for larger games.

A compromise would be to allow the clients to manage their own movement, and they in turn report their location to the server (which likely does some basic sanity checking on the reported movement). This leverages the computing power of each client and off-loads a tremendous amount of work from the server. But since the client is now authoritative about something (its position), the opportunity for cheating is amplified.

Example: Pure Client-Server

Pure client-server implementations abound, however they are not generally considered particularly advanced or "sexy". The best way to think of a pure client-server model is that each client is a dumb terminal - the only thing it does is transmit player input to the server and report server messages to the player.

The standard text MUD (multi-user dungeon) is a classic example of a pure client-server architecture. In fact, you can play many (if not most) text MUDs using the dumbest terminal possible - the telnet program.

The server's main loop would look something like this:

while not done
    for each player in world
        execute player's command
        tell player the results
    simulate the world
    broadcast to all players

The client's main loop would be extremely simple:

while not done
    if player has typed any text
        send typed text to server
    if output from server exists
        print output


Example: Pure Peer-to-Peer

Now let's look at a hypothetical pure peer-to-peer architecture. Imagine a tank game where two players can compete head to head. If such a game was peer-to-peer, you could say that each player is authoritative about his tank's position and state, and is also responsible for determining if they've hit the other tank. Since it's peer-to-peer, there is no central server; each client runs a loop something like this:

while not done
    collect player input
    collect network input about other players
    simulate player
    update player's state if informed it's been hit
    inform other clients if they've been hit
    move player
    update other player state (ammo, armor, etc.)
    report player state to other clients

If you've ever dealt with on-line cheaters, alarm bells should be going off looking at the above. Specifically, since the client is responsible for simulation, any of those simulation related items can be cheated.

The first is updating the player's state if he's been hit - it would be relatively easy for a hacker to simply ignore incoming "damage packets" (this can be done by patching the client program or even by hooking up a packet filter to monitor incoming network traffic).

The second is informing other clients they've been hit. It would, once again, be relatively trivial to tell each client they've been hit every frame. And since the other clients are explicitly trusting your client, they'll do the dutiful thing and take the damage.

The third is a simple one - movement. Since the client is responsible for the player's state, it can be modified to make the player move at arbitrary speed or even teleport to random locations ("hyperspace"). Finally, even mundane things can be hacked - infinite ammo, infinite armor, infinite powerups, or even something as basic as the score.

The key issue here is that since each client determines a lot of what happens in the world, there are many opportunities for cheating. A client-server architecture can still suffer from cheating, but it would require either active participation on the part of the server operator or rather lightweight cheating that operates by simulating a perfect player (aiming bots) or providing the player nominally inaccessible information (heads up displays, radar).

As a general rule, peer-to-peer architectures are significantly more difficult to secure against cheaters and hacks.

Of course, not having a server does simplify some things, and no single machine is required to simulate the entire world. Network bandwidth for each individual machine gets worse since they have to communicate with all the other machines directly, but you do not have the concentrated network bandwidth that would be seen at a single server.

Example: Hybrid System

A hybrid system combines aspects of client-server and peer-to-peer architectures. The idea is that by off-loading some of the work onto the clients, each client in turn will enjoy better responsiveness (by not having to wait for server updates) and the server will also have its workload reduced.

For example, a massively multiplayer game may have the server be authoritative about combat and player statistics, but the client may end up authoritative for movement. Since movement can consume a lot of CPU cycles this will lessen the workload on the server, and at the same time allow smooth movement on the part of the client. Of course, this is open to hacking, such as the various infamous "speed hacks".

Client-Side Prediction

If the server (or another peer) is authoritative about the world, there can be a significant amount of delay incurred as it broadcasts new state information back to the client. This lag can be jarring, and in some cases completely impractical.

For example, in the case of a first person shooter, it's simply not feasible to have the client transmit "intended view changes" (i.e. moving the mouse) and then wait for a response from the server in order to present those changes locally.

So what you'll normally do is assume that the various movement or orientation changes have been propagated and returned from the server. Then, if the server decides that it's not allowed, you back up and correct.

This is especially important for movement, where you're running around a world and want it to feel smooth. The client can predict movement fairly accurately, and only in exceptional cases will you be corrected. Also, you can ease in corrections gradually instead of making them pop suddenly, reducing the warping and popping artifacts.
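
As a very rough sketch of the idea (this is illustrative and not from the original article): the client applies each input immediately, remembers it, and replays any inputs the server has not yet acknowledged whenever an authoritative correction arrives. All type and function names below are hypothetical.

typedef struct { float x, y; } Position;
typedef struct { int sequence; float dx, dy; } MoveInput;

#define MAX_PENDING 64

static MoveInput s_pending[ MAX_PENDING ];
static int       s_numPending = 0;

/* applied locally the instant the player generates input */
void PredictMove( Position *pos, MoveInput in )
{
    pos->x += in.dx;
    pos->y += in.dy;

    if ( s_numPending < MAX_PENDING )
        s_pending[ s_numPending++ ] = in;   /* remember it until the server acknowledges it */
}

/* called when the server sends its authoritative position along with the
   last input sequence number it processed */
void ReconcileWithServer( Position *pos, Position serverPos, int lastProcessedSeq )
{
    int i, kept = 0;

    *pos = serverPos;                       /* snap (or ease) to the server's version of the truth */

    for ( i = 0; i < s_numPending; i++ )
    {
        if ( s_pending[ i ].sequence > lastProcessedSeq )
        {
            /* replay inputs the server hasn't seen yet */
            pos->x += s_pending[ i ].dx;
            pos->y += s_pending[ i ].dy;
            s_pending[ kept++ ] = s_pending[ i ];
        }
    }

    s_numPending = kept;
}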

Low Level Architecture

Now that we've sorted out the different types of high level architectures, we need to figure out how to do our low level communication architecture. The high level architecture determined how the different computers and elements of the game communicated with each other and, more importantly, which components were responsible for arbitration and decision making. The low level network layer, however, does not care about any of that. All it worries about is packaging up data and sending it across the wire (and, conversely, receiving data from other machines).

Back in the good old days, there were many different and proprietary network transports. Novell had IPX/SPX; Microsoft had NetBEUI; Apple had AppleTalk; and many other companies had their own proprietary systems. By the early 1990s, however, the TCP/IP (Transmission Control Protocol / Internet Protocol) suite eventually won out as the low level networking layer of choice.

There are many good books and Web pages that describe TCP/IP, the OSI network stack, and other details that are not directly relevant to this discussion. For us, it suffices that TCP/IP is our network layer of choice. If for some bizarre reason you get stuck in a time warp and have to support LAN play circa 1995, then you'll have to investigate Novell's IPX/SPX protocol support under DOS. But if you're not stuck in said time warp, TCP/IP is the only protocol you'll need to worry about for the foreseeable future.

TCP vs. UDP

TCP/IP is actually an umbrella term covering IP and the host of protocols (TCP, UDP, and ICMP) that are layered on top of it. The relevant parts for us are the TCP and UDP protocols, and by association the IP protocol for relaying packets.

The Transmission Control Protocol, or TCP, is a high-level (to the application), connection oriented, reliable, in-order packet delivery system. This means that it logically thinks of one machine "connecting" to another, and that any traffic sent between the two is both guaranteed to arrive and, just as importantly, guaranteed to arrive in order. In addition, TCP is a stream protocol - distinct packets are not sent, instead a constant stream of data is pumped from the source to the destination (underneath it's still packets, but to the application it looks like a stream of bytes).

The User Datagram Protocol, or UDP, provides a more primitive set of features than TCP. It is a connectionless, unreliable datagram protocol. Computers send each other discrete packets, with no guarantee that they will arrive.

Structurally speaking, TCP and UDP are sibling protocols since they both sit right on top of the IP layer, but to the programmer UDP is a much lower level protocol since it provides a smaller set of features compared to TCP.

For this reason, TCP is more convenient than UDP, but that convenience comes at a cost. TCP can exhibit significantly lower overall performance than UDP because of the extensive error checking, handshaking, congestion control and acknowledgements that occur as part of a data transfer. With TCP, even if you have received a packet, the kernel/stack will withhold that packet until all previous packets have arrived - this means that while more up to date information is available, your code may never see it until the TCP stack decides you can. This may include a 3-second retransmit timer in the case of lost packets.

As tempting as using TCP sounds, just don't do it. It always, and I mean always, leads to heartbreak in the end for real-time games.

For this reason, UDP will be our network transport of choice. We'll have to implement a lot of functionality on top of it that TCP provides, but by doing so we can tailor the performance and functionality exactly the way we want.

Sockets and WinSock

Fine, we've determined that UDP is the protocol we want to use, but how do we access it? Most modern operating systems export system level features via APIs (Application Programming Interfaces). Examples of common APIs include OpenGL; DirectX, GDI and Win32 (on Microsoft Windows); and Carbon on the Macintosh.

For our purposes, we're going to be using the sockets network API, also known as "BSD sockets". The BSD sockets API was developed in the early 80s to provide a reasonably portable method of accessing the TCP/IP subsystem (or "stack") on the various Unix flavors available at the time. A variant, WinSock, was eventually developed for Windows. Functionally, WinSock and BSD sockets are extremely similar. In fact, code can often be shared between them with a few appropriate definitions and #ifdefs around some of the Windows-specific code. There are differences between WinSock and BSD sockets, but by and large they're not significant.

UDP Concepts

As mentioned earlier, UDP is a connectionless, unreliable datagram protocol. Let's break that down to see what we're dealing with.

Connectionless

UDP does not have a native concept of a "connection". One computer does not connect to another - communication is not established and then persisted for some duration of time. Instead, communication is performed one packet, or datagram, at a time. It's like mailing letters to someone: you put an address on each envelope and send it off.

Unreliable

Unlike TCP, UDP does not guarantee that a packet sent from one computer to another will actually arrive. In addition, UDP does not guarantee that any packets received will arrive in the same order they were sent. As you can imagine, this is a somewhat inconvenient set of assumptions to deal with, so most games will either have to implement a reliable, in-order transfer mechanism on top of UDP and/or they will have to architect their high level protocols such that reliable, in-order data is unnecessary. I'll talk about the latter a little bit later.

Datagram Protocol

UDP operates by sending discrete sized packets of data, or datagrams. This is different from a stream-oriented protocol such as TCP, which sends data a byte at a time, leaving it up to the receiver to parse the byte stream. The proper size for a packet varies depending on various factors, but a good, round number that many people throw out is 1400 bytes, since that's a bit smaller than the typical Ethernet MTU (Maximum Transmission Unit).

Addresses and ports

UDP packets are sent to a destination address consisting of an IP address and a port number (16-bit unsigned value). The destination system must be listening on that port (discussed later) in order to receive the packet.

Choosing a port

There are three ranges of port values that any server author needs to be aware of: System (Well-Known) Ports (0 through 1023); User/Ephemeral (Registered) Ports (1024 through 49151); and Dynamic/Private Ports (49152 through 65535).

An application should never use port numbers assigned to the System Port range. Dynamic/Private Ports are pretty much fair game, but you can not rely on the availability of a particular port. So you'll probably end up doing what most people do - choose a port in the User Port range and just call it your own. On the off-chance that a user is running an application that allocates that same port, you should allow for the port value to be overridden or specified by the user.

Using UDP

As much as I'd like to, I simply do not have the time to do a full UDP library implementation. There is plenty of information on actual sockets and WinSock usage out there, so thankfully you're not at a loss when it comes to the details.

However, I highly suggest using the very good ENet open source UDP networking library available here: http://enet.cubik.org. It handles most of the nitty gritty UDP level stuff, letting the programmer concentrate on higher level, game specific architectural concerns.

Sockets

Under the sockets API all communication goes through a socket object allocated using the socket() system call. A socket is used for both sending and receiving data.
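
For example, allocating a UDP socket and binding it to a local listening port might look something like this minimal sketch (WinSock spellings shown; under BSD sockets the handle is an int and the error value is -1, and the port number here is just an illustrative choice):

SOCKET s = socket( AF_INET, SOCK_DGRAM, IPPROTO_UDP );

if ( s == INVALID_SOCKET )
{
    /* allocation failed - bail out */
}

/* optionally bind to a known local port so other machines can send to us */
struct sockaddr_in local;

memset( &local, 0, sizeof( local ) );
local.sin_family      = AF_INET;
local.sin_port        = htons( 30000 );   /* hypothetical port in the User range */
local.sin_addr.s_addr = INADDR_ANY;       /* accept traffic on any local interface */

if ( bind( s, ( const struct sockaddr * ) &local, sizeof( local ) ) != 0 )
{
    /* port already in use - let the user pick a different one */
}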

Sending data

Once you have a socket allocated, you can begin sending data to a specific IP address and port combination using the sendto() API:

int sendto( SOCKET s,
            const char *buf,
            int len,
            int flags,
            const struct sockaddr *to,
            int tolen );

The first parameter is the socket we've previously allocated with the socket() call. The second parameter is a pointer to a buffer of data we'd like to send. The third parameter specifies the number of bytes in "buf". The "flags" parameter specifies any special flags relating to this call. The "to" structure contains the destination address, and "tolen" is the size of the "to" buffer.

This is where it gets a little ugly. Specifically, the whole "sockaddr" situation is a bit of a type casting nightmare, but it's nothing too onerous once you figure it out.

Under UDP, your destination address is defined by a special structure called a "sockaddr_in". Under WinSock, it looks like this:

struct sockaddr_in {
    short          sin_family;
    u_short        sin_port;
    struct in_addr sin_addr;
    char           sin_zero[ 8 ];
};

We need to fill that in and pass it to sendto() to correctly send our chunk of data to the destination. "sin_family" is easy, and is set to AF_INET.

The "sin_addr" member is a little bit trickier. It needs to have a network ordered (big endian) network address. The easiest way to get this is to pass a string representing a standard IP address (e.g. "192.168.1.100") to the socket function inet_addr(). This will return a properly formatted address that can be stored directly into the sockaddr_in structure.

Note that inet_addr() does not support domain name lookup (i.e. converting "foo.example.com"), it only handles "dot" format numeric addresses. If your application needs to support both dotted and named IP addresses, the easy way to handle this situation is to use gethostbyname() if inet_addr() fails:

unsigned long
UDP_LookupAddress( const char *kpAddress )
{
    unsigned long a;

    // try looking it up as a raw address first
    if ( ( a = inet_addr( kpAddress ) ) == INADDR_NONE )
    {
        // if it fails (isn't a dotted IP), resolve it
        // through DNS
        hostent *pHE = gethostbyname( kpAddress );

        // didn't resolve
        if ( pHE == 0 )
        {
            return INADDR_NONE;
        }

        // it did resolve, do some casting ugliness
        a = *( ( unsigned long * ) pHE->h_addr_list[ 0 ] );
    }

    return a;
}

Finally, we need to specify the destination port. This isn't too hard, since we use the helper function htons() (host-to-network-short) to convert from host to network format.

Okay, let's put this together in a short snippet of code for filling in the sockaddr_in structure:

int UDP_FillSockAddrIn( struct sockaddr_in *sin,
                        const char *kpAddr,
                        unsigned short port )
{
    // zero memory
    memset( sin, 0, sizeof( *sin ) );

    // set the family
    sin->sin_family = AF_INET;

    // set our port - use "htons" to convert from
    // host endian to network endian byte order
    sin->sin_port = htons( port );

    // set our address
    sin->sin_addr.s_addr = UDP_LookupAddress( kpAddr );

    // make sure the address resolved okay
    if ( sin->sin_addr.s_addr == INADDR_NONE )
        return 0;

    return 1;
}

Holy crap, that's a lot of work. So let's tie it all together to send a buffer to some address just so we can see what the big picture is like:

void SendSomeData( const char *kpAddress, unsigned short port,
                   const void *kpSrc, int nBytes,
                   SOCKET s )
{
    struct sockaddr_in sin;

    if ( !UDP_FillSockAddrIn( &sin, kpAddress, port ) )
        return; // do better error handling, obviously

    sendto( s,
            ( const char * ) kpSrc,
            nBytes,
            0,
            ( const struct sockaddr * ) &sin,
            sizeof( sin ) );
}

Sending Data (Implicit Destination)

The above snippets show how to send data to an explicit destination. However, you can send data to an implicit destination by associating your socket with a default destination address using the connect() call. Once you do that, you can send data using the send() API without specifying an explicit destination each time. This may seem like you're connecting to the destination, but do not be fooled, you're not - you're simply telling your local socket that your destination address will be implicit in the future.
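
A minimal sketch of what that might look like, reusing the UDP_FillSockAddrIn() helper from above (the socket "s", the address and the port are illustrative values):

struct sockaddr_in sin;
const char         msg[] = "hello";

if ( UDP_FillSockAddrIn( &sin, "192.168.1.100", 30000 ) )
{
    /* for UDP no handshake occurs here; this only records the default destination */
    connect( s, ( const struct sockaddr * ) &sin, sizeof( sin ) );

    /* from now on the destination can be omitted */
    send( s, msg, sizeof( msg ), 0 );
}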

Receiving Data

Data is received on a socket by calling either recv() or recvfrom() on that socket. The two APIs operate identically, except that recvfrom() will tell you the address of the packet's origination. This is handy when you want to respond to that packet and you don't know in advance where it came from.
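
As a rough sketch, receiving a datagram and echoing it straight back to whoever sent it might look like this (assuming a socket "s" created and bound as shown earlier):

char               buf[ 2048 ];
struct sockaddr_in from;
int                fromLen = sizeof( from );   /* socklen_t on most BSD sockets stacks */

int n = recvfrom( s, buf, sizeof( buf ), 0,
                  ( struct sockaddr * ) &from, &fromLen );

if ( n > 0 )
{
    /* 'from' now holds the sender's IP and port, so we can reply directly */
    sendto( s, buf, n, 0, ( const struct sockaddr * ) &from, sizeof( from ) );
}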

Blocking vs. Non-Blocking I/O

By default most sockets implementations are "blocking". This means that when asked to do something, they'll wait until they're done doing it. This is bad for a game (unless you're running your network pump in a separate thread, but we don't have the space to get into that here).

Windows and BSD sockets have slightly different ways of enabling or disabling non-blocking I/O.

Under WinSock you call ioctlsocket():

// enable non-blocking I/O
arg = 1;

ioctlsocket( s, FIONBIO, &arg );

However, under BSD sockets you go through a slightly more circuitous route:

fcntl( s, F_SETFL, O_NONBLOCK | fcntl( s, F_GETFL ) );

Some Unix-like operating systems don't have the fcntl() interface, and on those you have to use ioctl(), which operates similarly to WinSock's interface:

arg = 1;

ioctl( s, FIONBIO, &arg );

Once a socket is no longer blocking, it will return a "soft" error in recv () / recvfrom () if there is no data waiting. It's important to check for this condition instead of just trapping it as an error condition!

int  err, result;
char buf[ 2048 ];

while ( 1 )
{
    result = recv( s, buf, sizeof( buf ), 0 );

    if ( result == SOCKET_ERROR )
    {
        err = errno; /* under Windows use WSAGetLastError() */

        if ( err == EAGAIN ) /* WSAEWOULDBLOCK on Windows */
        {
            /* no data waiting, not really an error */
            Sleep( 10 ); /* or break, or whatever */
        }
        else
        {
            /* a real error occurred! */
        }
    }
    else
    {
        /* do something with the data returned from recv() */
        /* should have 'result' number of bytes in buf */
    }
}

How Many Ports?

Since an application can allocate many ports, a common question is "How many ports should I use?" There aren't any really strong arguments one way or another, but it basically breaks down like this:

- Using multiple ports can simplify demultiplexing incoming packets. For example, you might assign one port per client, and thus assume all incoming traffic on that port is from that one client. Or you might have a port dedicated to "control" data and another port dedicated to "state" data, etc. This simplifies things moderately, but at the expense of creating and maintaining more ports. And you'll still have to verify the validity of each packet, so if you're going to be doing packet inspection anyway, manual demultiplexing will not cost much more.

- Each port will usually have its own set of buffers within the operating system. For this reason, if you find that you are accumulating a lot of network traffic, it might make sense to use more ports simply to have more buffer space.

- The more ports you need, the higher the likelihood you will conflict with an existing port used by another service. You will also use up more system resources, and will have to open/forward more ports on your firewall or NAT box.

The simplest thing is to use a single port and demultiplex packets based on their source address and/or payload.

UDP and TCP

One commonly floated suggestion is "Why not use UDP for unreliable data and TCP for reliable data?" At first glance this seems like a genuinely good idea, however in practice this falls apart. Because of the performance characteristics exhibited by TCP on flaky network connections, it will tend to run "skewed" from the UDP traffic. This means that the two different streams may end up out of phase with each other, often times causing very odd behaviour in game.

Just use a reliable system over UDP like ENet and ditch TCP altogether. Really.

Network Address Translation

Because of the limited allocation of unique IP addresses to businesses and individuals, there has been a surge of popularity of "network address translation" routers, also known as "NAT boxes".

The theory behind a NAT box is fairly simple - it takes a single, public IP address and multiplexes it to multiple internal computers with private IP addresses. It accomplishes this by keeping track of incoming and outgoing traffic and doing the appropriate address translations on the fly. It's conceptually simple, however it does have a set of drawbacks when dealing with gaming.

I'm not going to get into much detail on how NAT works - there are a ton of good articles on this available on the net - but I will discuss briefly how it affects a multiplayer game.

The first, and most obvious, problem is that a game hosted behind a NAT box needs to have its "listening" port forwarded. This is pretty standard stuff, nothing magical there - you just have to make sure that servers behind a NAT or firewall know to configure their network to forward traffic on that port to the server. In a peer-to-peer system, each player's machine must also have a port opened to accept traffic.

The second problem is that a game cannot assume that all traffic coming from a single IP address is from the same computer (since one public IP may be shared by multiple computers). Instead, it must examine the incoming packet's source address - including the port.

The third problem - and a fairly rare one at that, but which is insidious for this very reason - is that some routers/NAT boxes actually change the port they're forwarding dynamically. Since UDP is a connectionless protocol, technically there is no reason for them not to do this, but the practice is rare enough that many developers rely on the assumption that a player's initial IP:port source address will remain constant during the life of a game. Unfortunately, this is not so. This can cause problems if a game server binds client information to a source address.

For example, Biff tries to connect to a game's server. His source address (via NAT) may be 76.54.32.10:4567. Your game says "Okay, this is fine, we know that Biff is always at 76.54.32.10:4567".


Now Charlie connects to your game, and he's playing behind the same NAT box as Biff. His address is 76.54.32.10:6789 (reminder: he has the same public IP address as Biff since they share the NAT box). Once again, no problem, we keep cruising like we did before.

But then the NAT box, for some weird reason, decides it's going to translate Biff's address a little differently. All of a sudden it decides that Biff's source address is going to be 76.54.32.10:9876. In a connectionless protocol this is fine, since in theory a UDP-based server does not understand the notion of a "connected" client - all incoming UDP packets are supposed to be responded to via the source address returned from recvfrom(). Unfortunately, that's not how a game works.

So your game server gets a weird request from 76.54.32.10:9876, a source address that does not map to any known clients - an error condition. Not only that, but suddenly Biff does not seem to be connected anymore. Detecting this condition is a royal pain in the ass, and can lead to nightmarish support problems where a very small fraction of players complain about getting intermittently disconnected for no apparent reason.

One way to solve this is to avoid persistently keying client data on the source address and port value. Instead, each client should send a unique client ID that can be used to differentiate between multiple clients on a single IP address. I use a 16-bit unique identifier based on the lower two bytes of the client's private IP address. For example, 192.168.1.10 would generate a unique client ID of (1 << 8) | 10, or 0x010A.

Note: you could theoretically use the lowest byte only and still work in most situations (homes and small businesses), but larger networks with larger (two-byte) private IP ranges would pose a potential source of conflict. For example, if Biff was on 192.168.1.66 and Charlie was on 192.168.2.66, then a system that only looked at the lowest byte would not be able to differentiate between the two clients.
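
A minimal sketch of deriving that 16-bit ID, assuming the client's private IPv4 address is available in network byte order (for example from a getsockname() call); the function name is hypothetical:

unsigned short MakeClientId( unsigned long privateAddr )   /* address in network byte order */
{
    unsigned long host = ntohl( privateAddr );             /* 192.168.1.10 -> 0xC0A8010A */

    /* keep the lower two bytes: (1 << 8) | 10 = 0x010A */
    return ( unsigned short ) ( host & 0xFFFF );
}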

Encryption note: if the incoming packets are encrypted with a session key, you obviously cannot encrypt the client ID, since you would not be able to decrypt it. In these cases you'll probably have a packet structure where the first 16 bits are clear text containing the client ID, which can then be used to look up the client's session key, which is in turn used to decrypt the remainder of the payload.

Reliable / In-Order Delivery

Let's say you refuse to use ENet and instead want to do your own reliable, in-order protocol. I'll try to briefly describe a simple way to implement this.

The first thing you'll have to do is tag your outgoing packets with a sequence number. This value increments for each packet that goes out on the wire to a particular destination.

When a packet is sent for reliable delivery, you prepend the header (with sequence number) on the packet, then you store this packet in a buffer somewhere. When the packet arrives at the destination, the receiver needs to send an acknowledgement back to the sender saying "I received packet XYZ" (in reality this is fairly inefficient, and what you'll really want to do is just send back "the most recent in-order packet" you received, typically piggy backed on other data).

Until the sender receives that ack, it will continue to periodically resend that packet. When the ack arrives, the copy of the outgoing packet is deleted from the buffer.

Pretty simple stuff, however it does not take into account the issue of in-order delivery. Thankfully the sequence number handles this for you as well. When the receiver accepts a new packet, it simply checks the sequence number against the last sequence number received from that source, and if the sequence number is less than or equal to the last one, it just ignores it.

If the new packet's sequence is equal to the next expected sequence number (last sequence + 1), then it is accepted and the sender is notified that the packet arrived safely.

If the new packet's sequence is larger than the expected sequence number, you can do one of two things. The simplest thing to do is ignore it and wait for a resend from the server, however this is not very efficient. Instead, what you'll want to do is buffer that packet for some period of time, hoping that the necessary prior packets eventually show up. When they do, you can then unbuffer the "newer" packets, thereby minimizing the amount of latency incurred for a missed packet. The rate at which you resend should be calculated based on the mean round-trip-time (RTT) between the sender and receiver. This allows you to back off on redeliveries as connections experience packet loss, etc.
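
To make the receive side concrete, here is a minimal sketch of the sequence check described above. The header layout and the SendAck()/BufferOutOfOrderPacket() helpers are hypothetical, and a real implementation would also need to handle sequence number wraparound:

typedef struct
{
    unsigned short sequence;   /* incremented for each reliable packet sent to this destination */
    /* ... payload follows ... */
} ReliableHeader;

static unsigned short s_lastAccepted = 0;

int AcceptReliablePacket( const ReliableHeader *kpHdr )
{
    if ( kpHdr->sequence == ( unsigned short ) ( s_lastAccepted + 1 ) )
    {
        s_lastAccepted = kpHdr->sequence;
        SendAck( s_lastAccepted );            /* hypothetical: tell the sender it arrived */
        return 1;                             /* process the payload */
    }

    if ( kpHdr->sequence <= s_lastAccepted )
    {
        return 0;                             /* duplicate or stale - ignore it */
    }

    BufferOutOfOrderPacket( kpHdr );          /* hypothetical: hold it until the gap is filled */
    return 0;
}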

Connection Quality

One of the things that players will want to know is the "quality" of their connection to a particular server. This is usually quantified in some form as latency/lag and packet loss.

Latency is the amount of time it takes to reach a server from a particular client. Longer latency means slower responsiveness. Packet loss is usually expressed as a percentage of packets that are dropped between the client and server, which manifests itself as little hiccups during play as data has to be resent.

Measuring latency is trivial. On each outgoing packet, simply embed the time the packet was sent as part of the header. When the packet is received, the receiver sends that time back as part of the acknowledgement, so when the sender receives that acknowledgement it can look at the sent time and compare it to the wall clock to get an idea of how long the round trip takes.
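
A minimal sketch of that timestamp echo on the sender's side; GetMilliseconds() is a hypothetical wall-clock helper, and the smoothing factor is just an illustrative choice:

static float s_smoothedRtt = 0.0f;   /* running average so a single spike doesn't skew the number */

void UpdateRtt( unsigned long echoedSentTime )
{
    /* echoedSentTime is the timestamp we stamped into the original packet,
       returned to us in the acknowledgement */
    unsigned long rtt = GetMilliseconds() - echoedSentTime;

    s_smoothedRtt = 0.9f * s_smoothedRtt + 0.1f * ( float ) rtt;
}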

Embedding the timestamp requires slightly more data per packet, so some people prefer to issue several ping commands per second to measure the on-going latency.

Packet loss can be measured in several different ways, most of them purely arbitrary - pick one and stick with it.

Collating Data Into Large Packets

Every UDP packet that goes out is encumbered by a 22-byte UDP header (stuck on there by the transport layer) that tells everyone between it and its destination where it's trying to go and how big it is. That's a pretty big chunk of space to send out for every packet.

One early optimization that many do is to collate packets into larger packets so that you're not constantly sending these headers all the time. They usually collate up to the "ideal" transmission size, which changes but, empirically speaking, seems to hover right around 1400 bytes.

Obviously you do not want to collate for too long, since you can easily incur software induced lag as a result, so you should enforce a broadcast at some interval, e.g. every 100ms. Something else to consider is that users with modem connections may find that a 1400 byte packet takes far too long to broadcast (@56k, with a usable speed of 33k, it should take about 300ms to transmit a 1400 byte packet in a best case scenario). If modem connections are important, then a smaller buffer size of, say, 500 bytes might be more appropriate to reduce latency.
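
A rough sketch of that collation logic; the 1400 byte and 100 ms figures come from the text above, while the FlushCollated() helper and buffer layout are hypothetical:

#define COLLATE_MAX_BYTES 1400
#define COLLATE_MAX_MS    100

static char          s_collateBuf[ COLLATE_MAX_BYTES ];
static int           s_collateUsed = 0;
static unsigned long s_lastFlushMs = 0;

void QueueOutgoing( const void *kpData, int nBytes )
{
    /* flush early if this chunk won't fit in the current packet; chunks larger
       than COLLATE_MAX_BYTES need the fragmentation described in the next section */
    if ( s_collateUsed + nBytes > COLLATE_MAX_BYTES )
        FlushCollated();   /* hypothetical: sendto() the buffered data and reset s_collateUsed */

    memcpy( s_collateBuf + s_collateUsed, kpData, nBytes );
    s_collateUsed += nBytes;
}

void PumpCollation( unsigned long nowMs )
{
    /* don't sit on data forever just because the packet isn't full yet */
    if ( s_collateUsed > 0 && nowMs - s_lastFlushMs >= COLLATE_MAX_MS )
    {
        FlushCollated();
        s_lastFlushMs = nowMs;
    }
}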

Fragmentation / Reassembly

The inverse problem to small, inefficient packets is that of needing to transmit large data chunks that do not fit inside the magical 1400 byte "ideal" MTU size. As a convenience it's nice to provide packet fragmentation and reassembly - this makes the application programmer's life significantly easier.

How you implement this is up to you, but one direction to explore is to have "sub-sequence" numbers inside your header, which indicate if the packet is actually part of a larger packet. When a receiver sees these, it knows to grab those packets and reassemble the data as it arrives in fragments.
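
One possible shape for such a header, purely as an illustration of the sub-sequence idea (the field names and sizes are not from the original article):

typedef struct
{
    unsigned short sequence;        /* sequence number of the whole logical packet */
    unsigned char  fragmentIndex;   /* which piece of that packet this datagram carries */
    unsigned char  fragmentCount;   /* how many pieces make up the whole packet */
} FragmentHeader;

The receiver can allocate room for fragmentCount pieces, copy each fragment into place as it arrives, and hand the reassembled buffer up to the application once every piece is present.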

Occasional Polling vs. Separate Thread

Since data needs to be resent in order to ensure reliable delivery, and since you'll also want to see if new data is coming in so that you can send back reception acknowledgements, you'll need to pump your network code periodically.

One way to do this is to make sure you pump it every N milliseconds, for example at the top of your game's main loop. This is the simplest approach, and in fact the one I suggest for most situations, however it does have the tendency of being hiccup prone if a game loop suddenly takes an unexpectedly long duration (e.g. synchronously loading large assets from disk). (Solution: do not perform any operations in your main loop that can halt the system for extended periods of time, since that will often screw up more than just your network pump.)

A more complex, but more predictably timed, alternative is to have your network code running in a separate thread. Then you can sleep() and/or block pending new events, unaffected by the actions of the main loop. You'll then have to contend with race conditions and deadlocks, so designing your locking system well is critical. The complexities of doing this are not to be underestimated, so only go this direction if you firmly know what you're doing. In the vast majority of cases, simply pumping your sockets once per "frame" works well enough.

Compression and Bandwidth Management

Network bandwidth can be consumed very quickly, both on the client side (modems) and on the server side (limited pipe). As a result, data compression is important.

From an architectural point of view, the very first thing you should do is minimize the amount of data that you need to send, period. Before any type of fancy compression schemes, ensure that only the data a client needs is actually received. If you have a large world, it is impractical to broadcast the status of all entities to all clients - bandwidth usage will be beyond comprehension. Each client should have a specific "area of interest" along with some general state, and that is all the server should broadcast to that client. There is no need for someone fighting an orc in the Dungeon of Gloom to receive information about a conversation between a barkeep and patron at the Inn of Happiness six kilometers away.

Once you've determined the subset of relevant information that must be broadcast, you'll want to compress the actual data crossing the wire before the networking library even sees it. This means truncating values that do not need to be so large, etc. For example, if you do not truly need floating point position elements and can live just fine with 16-bit fixed point values, then use that. If you can live with 8-bit angle representations, then send bytes for that information, etc. If you have multiple bit flags, try to put them together into unused areas of your packet header.

Truncation and quantization should give some pretty decent compression, at least 25-50%. The next step is then to compress the data within the packet. Once again I'll bow out of any really detailed discussion, other than to say - there's a lot of information out there about data compression. This should be worth a few more percentage points.
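
As a small, hypothetical illustration of that kind of quantization (the ranges and scale factors here are arbitrary choices, not values from the article):

/* pack a coordinate in roughly [-4096, 4096) into a 16-bit fixed point value (1/8 unit precision) */
short PackCoord( float x )
{
    return ( short ) ( x * 8.0f );
}

float UnpackCoord( short packed )
{
    return ( float ) packed / 8.0f;
}

/* pack an angle in degrees [0, 360) into a single byte */
unsigned char PackAngle( float degrees )
{
    return ( unsigned char ) ( degrees * 256.0f / 360.0f );
}

float UnpackAngle( unsigned char packed )
{
    return ( float ) packed * 360.0f / 256.0f;
}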

The third step you'll want to take is delta compression, or sending only changes instead of absolute state. For example, say you send over 12 bytes of data to represent position information. If you do that every frame, even if the player isn't moving, that's wasteful. It is much more efficient to send that data only when it actually changes.

Delta compression is done by keeping a copy of the local state and the last state that was successfully received by the other machine, and sending only the changed values and notating which values are being sent through a "delta flags" header.
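
A minimal sketch of that "delta flags" idea for a made-up entity state; the field layout, flag names and the Buffer/Write*() helpers are all hypothetical:

typedef struct
{
    float x, y, z;
    short health;
    short armor;
} EntityState;

#define DELTA_POSITION ( 1 << 0 )
#define DELTA_HEALTH   ( 1 << 1 )
#define DELTA_ARMOR    ( 1 << 2 )

void WriteDelta( const EntityState *kpOld, const EntityState *kpNew, Buffer *pOut )
{
    unsigned char flags = 0;

    if ( kpNew->x != kpOld->x || kpNew->y != kpOld->y || kpNew->z != kpOld->z )
        flags |= DELTA_POSITION;
    if ( kpNew->health != kpOld->health )
        flags |= DELTA_HEALTH;
    if ( kpNew->armor != kpOld->armor )
        flags |= DELTA_ARMOR;

    WriteByte( pOut, flags );   /* the receiver reads this first to know which fields follow */

    if ( flags & DELTA_POSITION )
    {
        WriteFloat( pOut, kpNew->x );
        WriteFloat( pOut, kpNew->y );
        WriteFloat( pOut, kpNew->z );
    }
    if ( flags & DELTA_HEALTH )
        WriteShort( pOut, kpNew->health );
    if ( flags & DELTA_ARMOR )
        WriteShort( pOut, kpNew->armor );
}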

Security and encryption

One of the biggest concerns with on-line gaming has to do with cheating - a single cheater can ruin the game for hundreds or even thousands of other players.

Cheating

Cheating can occur whenever players leverage information they should not have or, even worse, when they can alter the game because their client is authoritative in some area.

Access to ostensibly limited information is one of the easiest hacks someone can perform. This takes advantage of the fact that many games send more information than the client is required to know, and have the client filter this information out appropriately.

For example, certain players may be invisible. This information might be sent over to the client as "entity X is invisible, so do not draw him". By hacking or modifying the client, the cheater can now simply say "ignore invisibility flags" and know where invisible players are located.

Players may also use available information in order to build client-side assistants, or bots. For example, since a player's heading is rarely sent to a server constantly (it changes too frequently), the server must ensure that the player's client has a full set of relevant information for his entire surrounding area. Someone can write a bot that displays a radar of the player's immediate surroundings in that type of situation, providing a huge tactical advantage.

Finally, there's the example where you have a client that is authoritative in some area, such as player movement. In those situations the player, with a suitably modified client, can pass outlandish values back to the server, giving him the ability to fly or run at ridiculously high speed. This is a serious problem for on-line worlds where there is a strong emphasis on game balance and perceived fairness. Ideally this is beaten by simply disallowing the client to be authoritative about anything, but sometimes this is not a practical approach.

If the player is cheating by using a hacked client, i.e. the client itself has been modified, then there's not much you can do short of a bunch of checksums on the executable, etc. But this is a losing proposition - if someone has SoftICE or a similar debugger hooked up to your program, you're going to lose in the long run.

Slightly easier to beat is when someone has installed a proxy, or a program that is intercepting network traffic between the server and the client. Proxies rely on knowing the format and contents of the packets sent between the client and server, so if you can remove this knowledge, you're part of the way there.

Which means encryption.

Encryption

Encrypting the traffic between a server and client is extremely important, not just for cheating, but for basic security. For example, if a player is sending personal information such as credit card numbers or passwords in cleartext, then someone using a simple packet sniffer in proximity to that player will be able to read this information trivially.

Even ignoring the issue of privacy, there's the problem of cheating. Cleartext packets are easy to examine and reverse engineer, so encryption is important to deter cheating via proxies/sniffers.

For our purposes, there are two key forms of encryption that we need to be concerned about - symmetric key encryption and asymmetric key encryption.

Symmetric key encryption works by sharing a single key between the client and server. Most symmetric key algorithms, such as Blowfish, are significantly faster than asymmetric key algorithms by several orders of magnitude.

Asymmetric key encryption no longer shares a single key. Instead, each party has a pair of keys used to encrypt and decrypt information. Possession of one key does not allow you to decrypt messages encrypted with that same key. Asymmetric key algorithms such as PGP and RSA are extremely expensive and are generally not suitable for packet encryption.

Ideally everything is encrypted with a symmetric key algorithm, but that presents a significant problem - how do you exchange the key?

One mechanism is to somehow generate the key based off of client data known to the client and server both, for example the client's name and address. Both sides can generate an identical symmetric key based on this without having to exchange it in cleartext.

This works fine when trying to avoid packet sniffers, but it does not prevent the situation where someone is hacking the client directly, finds the key value, then stores it in the proxy so that it, too, can decrypt the packet stream. In addition, this requires a priori knowledge of client details on the part of the server, and for non-persistent games that don't require registration this may not be feasible.

So that's where asymmetric encryption comes in. With an asymmetric system, the server has a private key and a public key. The public key is well known to all clients. So the client randomly generates a per-session symmetric encryption key, encrypts it with the server's public key, and sends that over. Nothing in between the server and client can decrypt this data, since it would require knowledge of the server's private key, which is effectively under lock and key. When the packet arrives, the server decrypts it with its private key, and from then on both sides can use the symmetric session key.

This provides a good balance of security and performance. In theory the session key can still be retrieved, but it would require an actively hacked client instead of just hacking the client (or its data files) a single time and storing the key. It's far from fool proof, but it raises the bar significantly.

Portability Issues

When dealing with networked multiplayer games, there is a chance (if you support it) that you will be exchanging information between heterogeneous systems. For example, your servers may be running on Sparc/Solaris, but your clients might be on x86/Windows.

Most issues to do with portability apply just as strictly when dealing with the network as they do with data files. Just like persisting data to disk, you have to be aware of size, endianness and alignment issues when persisting data to the network. In addition, keep in mind that some areas - such as floating point - may not evaluate the same on two systems with different processors or clients compiled with different compilers.

Because endianness plays such a key role in network communications (the UDP and IP headers themselves use big-endian, or "network", byte order), there are several socket library functions designed to convert between "network" endianness (big-endian) and host endianness. These functions are:

ntohl() - network-to-host byte ordering, long (32 bits)
ntohs() - network-to-host byte ordering, short (16 bits)
htonl() - host-to-network byte ordering, long (32 bits)
htons() - host-to-network byte ordering, short (16 bits)
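
A tiny sketch of using these helpers when writing values into an outgoing packet buffer (the field values and layout are just illustrative):

char           buf[ 64 ];
unsigned short port   = 30000;
unsigned long  userId = 0x12345678;

unsigned short wirePort = htons( port );     /* host -> network order, 16 bits */
unsigned long  wireId   = htonl( userId );   /* host -> network order, 32 bits */

memcpy( buf,                      &wirePort, sizeof( wirePort ) );
memcpy( buf + sizeof( wirePort ), &wireId,   sizeof( wireId ) );

/* the receiver copies the fields back out of the buffer and reverses the
   conversion with ntohs() / ntohl() */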

For a more robust solution, I would recommend that you handle byte ordering at a higher level, using your own macros/functions. Alternatively, you can look at the POSH headers.

Summary

Multiplayer network game programming is an extremely complex and daunting task, because it's difficult to learn "just part of it". There are so many interrelated issues that without a gestalt understanding of the entire situation, it almost seems like an impossible task at first.

The goal of this document is to provide a high-altitude view of how the different pieces in a networked game environment come together. While no actual source code is provided, that is rarely going to be the difference between success or failure - understanding the key topics is really the important part, because the code can and will change depending on your design criteria.

Contributors

Thanks to Jeremy Notzelman, Bruce Mitchener, Jon Watte and Jonathan Blow for feedback and comments. Thanks to Gavin Doughtie for pointing out some grammatical errors.

Resources

Stevens, W. Richard, "Unix Network Programming, Vol. 1", Pearson Education

ENet, http://enet.cubik.org

