Linux Netfilter: Implementation Mechanism and Extension Technology



Contents:

1. IP packet flow
2. The Netfilter framework
3. Netfilter-iptables extensions
4. Case study: implementing a VPN with Netfilter
References


Yang Shazhou (pubb@163.net), Computer College, National University of Defense Technology, October 2003

Starting from the flow of packets through the Linux network protocol stack, this article gives an in-depth analysis of Netfilter-iptables, the most popular firewall-building platform in the Linux 2.4.x kernel, with emphasis on how to write extensions within the Netfilter-iptables mechanism. At the end of the article, a solution that uses extended Netfilter-iptables to implement a VPN is presented.

The IP protocol stack in the 2.4.x kernel changed considerably compared with 2.2.x, and Netfilter-iptables is one of its major features. Because it is powerful and cleanly integrated with the kernel, it has become the main tool for extending network applications on the Linux platform. Such extensions include not only firewalls - that is merely the basic function of Netfilter-iptables - but also all kinds of packet processing (packet encryption, traffic classification and statistics, and so on), and even virtual private networks (VPNs) built on the Netfilter-iptables mechanism. This article analyzes the organization of Netfilter-iptables in depth and describes how to extend it.

Netfilter is currently implemented for ARP, IPv4 and IPv6. Since IPv4 is the mainstream of today's network applications, this article analyzes only the IPv4 implementation. To understand how Netfilter works, one must start from the way Linux processes IP packets, because Netfilter builds itself into exactly that process.

1. IP packet flow

The IP protocol stack is a principal component of the Linux operating system, one of its strong points, and highly stable. Netfilter is tightly interwoven with the IP stack, so to understand how Netfilter operates one must first understand how the stack handles packets. The structure of the IPv4 stack (at the IP layer) and its packet handling are introduced below by following the path of a packet transmitted through an IP tunnel. IP tunneling is a virtual-LAN technique already available in the 2.0.x kernels: it creates a virtual network device in the kernel, encapsulates ordinary packets inside IP packets, and transfers them across a TCP/IP network. If IP tunnels are established between gateways, a virtual LAN can be implemented together with suitable handling of ARP packets. We start from the moment a packet reaches the IP tunnel device.

1.1 Packet sending

When the IPIP module creates a tunnel device (device names tunl0 ~ tunlx), it sets the device's packet transmit interface (hard_start_xmit) to ipip_tunnel_xmit(); the flow is shown in Figure 1 (packet sending process).

1.2 Packet receiving

Packet reception starts in the NIC driver: when the network card receives a packet it raises an interrupt, and the interrupt service routine in the driver calls a predetermined receive function. Still taking an IP tunnel packet as the example, with the de4x5 NIC driver, the process divides into two phases, the driver interrupt-service phase and the IP protocol stack processing phase, shown in Figure 2 (receive process, driver phase) and Figure 3 (receive process, stack phase). If the packet has to be forwarded, ip_forward() is called at the point marked by the red arrow above; the forwarding process is shown in Figure 4.

From the flows above it can be seen that Netfilter appears on the packet path in the form of NF_HOOK() macro calls.

2. The Netfilter framework

Netfilter was introduced in the 2.4.x kernels. Although it provides compatibility with the 2.0.x kernel's ipfwadm and the 2.2.x kernel's ipchains, its role goes far beyond that. As the analysis of the IP packet flow shows, Netfilter is fully integrated with IP packet processing, yet thanks to its relative independence it can also be stripped out completely. This design is one reason why Netfilter-iptables is both efficient and flexible.

Before analyzing the Netfilter mechanism itself, let us start from its use and work from the surface inward.

2.1 Kernel configuration and compilation

In Networking Options, select the Network packet filtering item and set all the options under the IP: Netfilter Configuration section below it to module mode. Compile and install the new kernel and reboot; Netfilter is then available. The relevant kernel configuration options are explained below (the same information can be found in the configuration Help):

[Kernel/User netlink socket] lets the kernel establish PF_NETLINK sockets between the kernel and user processes; Netfilter needs this when it uses user-space queues to handle certain packets;
[Network packet filtering (replaces ipchains)] the main Netfilter option, providing the Netfilter framework;
[Network packet filtering debugging] a branch of the main option, enabling more detailed Netfilter reporting;
[IP: Netfilter Configuration] this section gathers the various Netfilter options:
  [Connection tracking (required for masq/NAT)] connection tracking, used for connection-based packet processing such as NAT;
  [IP tables support (required for filtering/masq/NAT)] the table container of the Netfilter framework, on which filtering, NAT and the other applications are built;
  [ipchains (2.2-style) support] ipchains compatibility code, an ipchains interface on top of the new Netfilter structure;
  [ipfwadm (2.0-style) support] compatibility code for the 2.0-kernel firewall tool ipfwadm, implemented on the new Netfilter.

2.2 Overall structure

Netfilter is a series of call entries embedded in the kernel's IP protocol stack, placed along the packet-processing path. Network packets fall into three categories: packets flowing in (destined for the local host), packets flowing out (generated locally), and packets flowing through (forwarded from one NIC to another). Following this flow, Netfilter inserts processing at five points:

NF_IP_PRE_ROUTING, executed right after a packet enters the IP layer, before routing;
NF_IP_LOCAL_IN, executed after routing, for packets destined for the local host;
NF_IP_FORWARD, executed before a packet is forwarded to another NIC;
NF_IP_LOCAL_OUT, executed for packets generated by the local host;
NF_IP_POST_ROUTING, executed just before a packet leaves the system.

The positions of the hooks are shown in Figure 5 (Netfilter hook positions). The Netfilter framework provides the same kind of hook for several protocol families; the hooks are stored in a two-dimensional array of list heads, struct list_head nf_hooks[NPROTO][NF_MAX_HOOKS], with the first dimension indexed by protocol family and the second by the call points listed above.
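For reference, the five IPv4 call points correspond to the following constants in include/linux/netfilter_ipv4.h of the 2.4.x kernels:

/* IPv4 hook numbers (second index into nf_hooks[PF_INET][...]) */
#define NF_IP_PRE_ROUTING   0   /* after sanity checks, before routing */
#define NF_IP_LOCAL_IN      1   /* destined for the local host */
#define NF_IP_FORWARD       2   /* being forwarded to another interface */
#define NF_IP_LOCAL_OUT     3   /* generated by a local process */
#define NF_IP_POST_ROUTING  4   /* about to hit the wire */
#define NF_IP_NUMHOOKS      5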

Every module that wishes to embed itself in Netfilter can register hook functions at any of the call points of any of the protocol families; the functions registered at one point form a chain of function pointers. Each time the protocol stack code executes the NF_HOOK() macro (there are several such occasions), all the functions registered at that point are invoked in turn and may process the packet data passed in the parameters. After processing, every registered hook function returns one of the following values to tell the Netfilter core what to do next:

NF_ACCEPT: continue normal packet processing;
NF_DROP: discard the packet;
NF_STOLEN: the hook function has taken the packet over; do not continue processing it;
NF_QUEUE: queue the packet, usually for handling by a user-space program;
NF_REPEAT: call this hook function again.

2.3 iptables

Netfilter-iptables consists of two parts: one is Netfilter's "hooks", the other is a set of rules that tell the hook functions how to operate - these rules are stored in data structures called tables (iptables). The hook functions decide what value to return to the Netfilter framework by consulting the tables. The current kernel (2.4.21) has three built-in tables, filter, nat and mangle, and most packet-processing tasks can be accomplished simply by filling rules into these built-in tables.

filter: this module performs filtering only; it does not modify packets, it merely accepts or rejects them. It registers hook functions at NF_IP_LOCAL_IN, NF_IP_FORWARD and NF_IP_LOCAL_OUT, so every packet passes through the filter module.

nat: Network Address Translation. The module is built on the connection tracking module; it matches and processes only the first packet of each connection, then relies on connection tracking to apply the result to all subsequent packets of the same connection. nat registers hook functions at NF_IP_PRE_ROUTING and NF_IP_POST_ROUTING, and, if required, can also register hooks at NF_IP_LOCAL_IN and NF_IP_LOCAL_OUT to translate addresses for packets entering or leaving the local host. NAT modifies only the address information in the packet header, not the packet contents. According to which part is modified, NAT divides into source NAT (SNAT) and destination NAT (DNAT): the former rewrites the source address of the packet, the latter rewrites the destination address. SNAT can be used to implement IP masquerading, while DNAT is the basis of transparent proxying.

mangle: the table for modifying packet contents; the fields it can modify include MARK, TOS, TTL and so on. The mangle table's hook functions are registered at Netfilter's NF_IP_PRE_ROUTING and NF_IP_LOCAL_OUT points.

Kernel programmers can also create new tables of their own by writing modules that call the Netfilter interface functions. In the Netfilter-iptables applications discussed below we will look further at the structure and use of these tables.

2.4 The Netfilter configuration tool: iptables

iptables is the user-space configuration program written specifically for Netfilter in the 2.4.x kernels. It drives Netfilter through the socket interface; the socket is created as socket(TC_AF, SOCK_RAW, IPPROTO_RAW), where TC_AF is AF_INET. A privileged program can obtain a handle on Netfilter by creating such a "raw IP socket" and then read and change the Netfilter settings through the getsockopt() and setsockopt() system calls, as shown below. iptables is powerful; its operations on a table consist mainly of adding, modifying and flushing rule chains, and its command-line parameters fall into four groups: selecting the table (-t); specifying the operation on the table (-A, -D, and so on); describing and matching the rule; and options for the iptables command itself (-n, and so on). In the following example we redirect TCP connections arriving on eth0 for port 53 (DNS) of 10.0.0.1 to the address 192.168.0.1:

iptables -t nat -A PREROUTING -p tcp -i eth0 -d 10.0.0.1 --dport 53 -j DNAT --to-destination 192.168.0.1

Because iptables is the user interface of Netfilter, Netfilter-iptables is often simply called iptables, a name parallel to those of its predecessors ipchains and ipfwadm.

2.5 Core data structures of iptables

2.5.1 Tables

In the Linux kernel a table is represented by struct ipt_table, defined as follows (include/linux/netfilter_ipv4/ip_tables.h):

struct ipt_table
{
    struct list_head list;
    /* chain linking all tables */
    char name[IPT_TABLE_MAXNAMELEN];
    /* table name, e.g. "filter", "nat"; to support automatic module loading,
       the module containing the table should be named iptable_'name'.o */
    struct ipt_replace *table;
    /* table template, initially initial_table.repl */
    unsigned int valid_hooks;
    /* bit vector indicating the hook points this table attaches to */
    rwlock_t lock;
    /* read-write lock, initially unlocked */
    struct ipt_table_info *private;
    /* the table's data area, see below */
    struct module *me;
    /* set if the table is defined in a module (NULL otherwise) */
};

struct ipt_table_info is the structure that actually describes the table (net/ipv4/netfilter/ip_tables.c):

struct ipt_table_info
{
    unsigned int size;
    /* size of the table */
    unsigned int number;
    /* number of rules in the table */
    unsigned int initial_entries;
    /* number of initial rules, used for the module reference count */
    unsigned int hook_entry[NF_IP_NUMHOOKS];
    /* for each hook the table cares about, the offset of its first rule
       relative to the entries field below */
    unsigned int underflow[NF_IP_NUMHOOKS];
    /* upper bound of the rule range belonging to each hook_entry; when there
       are no rules, the corresponding hook_entry and underflow are both 0 */
    char entries[0] ____cacheline_aligned;
    /* the rule entries themselves */
};

For example, the built-in filter table is initially defined as follows (net/ipv4/netfilter/iptable_filter.c):

static struct ipt_table packet_filter
= { { NULL, NULL },          /* list head */
    "filter",                /* table name */
    &initial_table.repl,     /* initial table template */
    FILTER_VALID_HOOKS,      /* defined as ((1 << NF_IP_LOCAL_IN) | (1 << NF_IP_FORWARD) | (1 << NF_IP_LOCAL_OUT)),
                                i.e. the table cares about INPUT, FORWARD and OUTPUT */
    RW_LOCK_UNLOCKED,        /* lock */
    NULL,                    /* table data, initially empty */
    THIS_MODULE              /* module tag */
};
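As a minimal sketch of how this structure is put to use (simplified, error handling omitted; the complete code is in iptable_filter.c), the table module's init() registers the table so that the kernel can build its private data area:

static int __init init(void)
{
    /* register the filter table; its private area is then
       built from the initial_table.repl template */
    return ipt_register_table(&packet_filter);
}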

After ipt_register_table() is called, the private data area of the filter table is filled in from the template.

2.5.2 Rules

A rule is represented by struct ipt_entry and consists of the IP header fields to match, one target, and zero or more matches. Because the number of matches is not fixed, the actual space occupied by a rule is variable. The structure is defined as follows (include/linux/netfilter_ipv4/ip_tables.h):

struct ipt_entry
{
    struct ipt_ip ip;
    /* IP header fields the rule should match */
    unsigned int nfcache;
    /* bit vector indicating which parts of the packet this rule cares about; not used */
    u_int16_t target_offset;
    /* offset of the target area; the target area normally follows the match area,
       which begins at the end of ipt_entry; initialized to sizeof(struct ipt_entry),
       i.e. assuming there are no matches */
    u_int16_t next_offset;
    /* offset of the next rule relative to this one, i.e. the total space used by
       this rule; initialized to sizeof(struct ipt_entry) + sizeof(struct ipt_entry_target),
       i.e. assuming there are no matches */
    unsigned int comefrom;
    /* bit vector recording the hooks from which this rule can be reached;
       can be used to check the rule's validity */
    struct ipt_counters counters;
    /* packet and byte counters for traffic that has hit this rule */
    unsigned char elems[0];
    /* start of the match/target area */
};
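To illustrate how the two offsets are used (a sketch based on the layout just described, not code taken from the kernel), the target of a rule and the next rule in the table can be located like this:

struct ipt_entry *e;            /* some rule in the table */
struct ipt_entry_target *t;
struct ipt_entry *next;

t    = (struct ipt_entry_target *)((char *)e + e->target_offset); /* start of the target area */
next = (struct ipt_entry *)((char *)e + e->next_offset);          /* the rule that follows */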

The rules are grouped by the hook point they belong to and laid out one after another in the area at struct ipt_table::private->entries.

2.5.3 How rules are filled in

Having seen the in-kernel data structures of iptables, we now follow a user configuring a rule with the iptables program to see how these structures cooperate. One of the simplest possible rules is "reject all forwarded packets"; the iptables command for it is:

iptables -A FORWARD -j DROP

The iptables program converts the command line into its internal representation (iptables-standalone.c::main()::do_command()) and then calls the iptc_commit() function provided by the libiptc library to submit the request to the kernel. iptc_commit() (i.e. TC_COMMIT()) is defined in libiptc/libiptc.c. According to the request it fills in a struct ipt_replace structure describing the table involved (filter), the hook points affected (FORWARD) and so on, followed by the rule itself, a struct ipt_entry structure (in general there may be several rule entries). After organizing these data, iptc_commit() invokes the setsockopt() system call to start the kernel-side processing:

setsockopt(
    sockfd,          /* socket created with socket(TC_AF, SOCK_RAW, IPPROTO_RAW), where TC_AF is AF_INET */
    TC_IPPROTO,      /* i.e. IPPROTO_IP */
    SO_SET_REPLACE,  /* i.e. IPT_SO_SET_REPLACE */
    repl,            /* the struct ipt_replace structure */
    sizeof(*repl) + (*handle)->entries.size
                     /* the ipt_replace plus the ipt_entry structures following it */
);
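As an aside, the read direction mentioned in section 2.4 works the same way; a sketch (hypothetical snippet, assuming the 2.4.x ip_tables user API) of querying the filter table's summary information with getsockopt() might look like this:

struct ipt_getinfo info;
socklen_t len = sizeof(info);

strcpy(info.name, "filter");            /* which table to query */
if (getsockopt(sockfd, IPPROTO_IP, IPT_SO_GET_INFO, &info, &len) == 0)
    printf("filter table: %u rules, %u bytes\n",
           info.num_entries, info.size);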

The kernel's handling of setsockopt() is driven down layer by layer from the top of the protocol stack; the call path is shown in Figure 6 (rule filling process). nf_sockopts is a list of struct nf_sockopt_ops structures built by nf_register_sockopt() when iptables initializes. For IPv4, an ipt_sockopts variable (struct nf_sockopt_ops) is defined in net/ipv4/netfilter/ip_tables.c, and its set operation is do_ipt_set_ctl(); thus when nf_sockopt() dispatches the set operation, control passes to net/ipv4/netfilter/ip_tables.c::do_ipt_set_ctl(). For the IPT_SO_SET_REPLACE command, do_ipt_set_ctl() calls do_replace() to process the struct ipt_replace and struct ipt_entry data prepared by the user level and to fill the rule into the region indicated by hook_entry[NF_IP_FORWARD] of the filter table (selected by struct ipt_replace::name). If a rule is being added, the net effect is that the gap between hook_entry[NF_IP_FORWARD] and underflow[NF_IP_FORWARD] in the filter table's data area (struct ipt_table_info) is enlarged to hold the new rule, and private->number is incremented by one.

2.5.4 How rules are applied

Once rules have been injected into iptables in this way, they hang off the corresponding hook entries of their tables. When a packet reaching a hook matches a rule, the target associated with that rule is invoked. Take a forwarded packet as the example and assume the rule described above, rejecting all forwarded packets, has been added to the filter table. As shown in section 1.2, packets to be forwarded are handled by ip_forward() after routing, and before handing the packet to ip_forward_finish(), ip_forward() executes:

NF_HOOK(PF_INET, NF_IP_FORWARD, skb, skb->dev, dev2, ip_forward_finish);

NF_HOOK is a macro (include/linux/netfilter.h):

#define NF_HOOK(pf, hook, skb, indev, outdev, okfn) \
(list_empty(&nf_hooks[(pf)][(hook)]) \
 ? (okfn)(skb) \
 : nf_hook_slow((pf), (hook), (skb), (indev), (outdev), (okfn)))

That is, if nf_hooks[PF_INET][NF_IP_FORWARD] is empty (no processing function is registered on that hook), ip_forward_finish(skb) is called directly; otherwise net/core/netfilter.c::nf_hook_slow() is called and control is handed over to Netfilter. Here the two-dimensional array of hook lists appears again:

struct list_head nf_hooks[NPROTO][NF_MAX_HOOKS];

Every table that wants to use Netfilter must register its processing functions on the corresponding lists of the nf_hooks array. For the filter table, its initialization routine (net/ipv4/netfilter/iptable_filter.c::init()) calls net/core/netfilter.c::nf_register_hook() to link three predefined struct nf_hook_ops structures (corresponding to the INPUT, FORWARD and OUTPUT chains) into the lists:

struct nf_hook_ops
{
    struct list_head list;
    /* list linkage */
    nf_hookfn *hook;
    /* processing function pointer */
    int pf;
    /* protocol family */
    int hooknum;
    /* hook number */
    int priority;
    /* priority; the functions on each nf_hooks list are ordered by priority */
};
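To make the registration mechanism concrete, here is a minimal sketch (a hypothetical module, not from the article) of a hook function registered directly with nf_register_hook(), using the 2.4.x interfaces described above:

#include <linux/module.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>
#include <linux/skbuff.h>

/* the 2.4.x nf_hookfn signature */
static unsigned int watch_hook(unsigned int hooknum,
                               struct sk_buff **pskb,
                               const struct net_device *in,
                               const struct net_device *out,
                               int (*okfn)(struct sk_buff *))
{
    /* inspect (*pskb) here; let the packet go on its way */
    return NF_ACCEPT;
}

static struct nf_hook_ops watch_ops = {
    { NULL, NULL },        /* list, filled in by nf_register_hook() */
    watch_hook,            /* processing function */
    PF_INET,               /* protocol family */
    NF_IP_PRE_ROUTING,     /* hook point */
    NF_IP_PRI_FILTER       /* priority (0, same level as the filter table) */
};

static int __init watch_init(void)
{
    return nf_register_hook(&watch_ops);
}

static void __exit watch_exit(void)
{
    nf_unregister_hook(&watch_ops);
}

module_init(watch_init);
module_exit(watch_exit);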

For the filter table the hook function at the FORWARD point is set to ipt_hook(), which calls ipt_do_table(); in fact almost all processing functions eventually call ipt_do_table() to look up the rules in their table and invoke the corresponding targets. Figure 7 (rule application flow) shows the calling sequence of nf_hook_slow() at the FORWARD point.

2.5.5 Structural characteristics of Netfilter

As the above shows, the nf_hooks array of lists is the link between the packet-processing path and iptables. During the initialization of each table (its init() function), ipt_register_table() is called to establish the rule container, and nf_register_hook() is called to announce the table's interest to the Netfilter framework. After initialization, the user only has to operate on the rule container with the user-level iptables command (adding, deleting or modifying rules); the application of the rules is completely automatic and transparent. If a container holds no rules, or nothing has registered any interest in nf_hooks, packets are processed exactly as usual, unaffected by Netfilter-iptables; and even a packet that has been processed by the rules returns to the ordinary processing path (unless it is dropped or stolen). Seen from a distance, the hook points are like service stations along the packet's journey. Besides this efficient design, Netfilter is also very flexible, which shows chiefly in the extensibility of Netfilter-iptables: tables, matches, targets and connection tracking protocol helpers can all be extended, as the following sections describe.

3. Netfilter-iptables extensions

Netfilter provides a hook framework whose great advantage is extensibility. The extensible Netfilter-iptables components are mainly table, match, target and connection tracking protocol helper, corresponding to four sets of extension interfaces. Every extension consists of a kernel part and a user-space part: the kernel part lives under net/ipv4/netfilter/ and, as a module, is named ipt_'name'.o; the user-space part lives in the extensions/ directory of the iptables source as a dynamic library named libipt_'name'.so.

3.1 Tables

Tables have been introduced in the chapters above; a table is the container in which rules are stored, and it determines when its rules can take effect. The three tables provided by the system already cover all the hook points, so most applications can be built around them; nevertheless, programmers may define their own special-purpose tables. To do so, create a new struct ipt_table data structure modeled on the existing table definitions, call ipt_register_table() to register the new table, and call nf_register_hook() to attach the new table's processing functions to the Netfilter hooks. Situations that require a new table are not common, so this is not covered in more detail here.

3.2 Match & Target

Matches and targets are the most frequently used extension facilities in Netfilter-iptables; used flexibly, they can accomplish the vast majority of packet-processing tasks.

3.2.1 The match data structure

The kernel represents a match with struct ipt_match:

struct ipt_match
{
    struct list_head list;
    /* usually initialized to { NULL, NULL }; used by the kernel */
    const char name[IPT_FUNCTION_MAXNAMELEN];
    /* name of the match; the module file containing it must be named ipt_'name'.o */
    int (*match)(const struct sk_buff *skb,
                 const struct net_device *in,
                 const struct net_device *out,
                 const void *matchinfo,
                 int offset,
                 const void *hdr,
                 u_int16_t datalen,
                 int *hotdrop);
    /* returns non-zero on a successful match; if it returns 0 and sets *hotdrop
       to 1, the packet should be dropped immediately */
    int (*checkentry)(const char *tablename,
                      const struct ipt_ip *ip,
                      void *matchinfo,
                      unsigned int matchinfosize,
                      unsigned int hook_mask);
    /* validity check performed when a rule containing this match is added;
       if it returns 0, the rule is not accepted into iptables */
    void (*destroy)(void *matchinfo, unsigned int matchinfosize);
    /* called when a rule containing this match is removed from a table;
       paired with checkentry, it can be used for dynamic memory allocation and release */
    struct module *me;
    /* indicates whether the match is implemented as a module (NULL if not) */
};
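As an illustration (a hypothetical ip_ext module with assumed names, following the 2.4.x structure above), a kernel-side match skeleton that accepts every packet could look like this:

#include <linux/module.h>
#include <linux/skbuff.h>
#include <linux/netfilter_ipv4/ip_tables.h>

static int
ip_ext_match(const struct sk_buff *skb,
             const struct net_device *in,
             const struct net_device *out,
             const void *matchinfo,
             int offset,
             const void *hdr,
             u_int16_t datalen,
             int *hotdrop)
{
    return 1;                   /* non-zero: the packet matches */
}

static int
ip_ext_checkentry(const char *tablename,
                  const struct ipt_ip *ip,
                  void *matchinfo,
                  unsigned int matchinfosize,
                  unsigned int hook_mask)
{
    return 1;                   /* non-zero: the rule is acceptable */
}

static struct ipt_match ip_ext_match_reg = {
    { NULL, NULL },             /* list */
    "ip_ext",                   /* name: the module file must be ipt_ip_ext.o */
    ip_ext_match,
    ip_ext_checkentry,
    NULL,                       /* destroy: nothing to release */
    THIS_MODULE
};

static int __init init(void)
{
    return ipt_register_match(&ip_ext_match_reg);
}

static void __exit fini(void)
{
    ipt_unregister_match(&ip_ext_match_reg);
}

module_init(init);
module_exit(fini);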

After defining a struct ipt_match structure, call ipt_register_match() to register the match on the global match list; when the match is built as a module this is normally done in init_module().

3.2.2 User-level declaration of a match

To use a match defined in the kernel (existing or custom), the user-level iptables program must also know about it. The iptables source already declares the known kernel matches, but a new match needs its own declaration. In iptables a match is represented by struct iptables_match:

struct iptables_match
{
    struct iptables_match *next;
    /* match chain, initially NULL */
    ipt_chainlabel name;
    /* name of the match; similar to the kernel module, the naming rule for the
       iptables extension library is libipt_'name'.so (libip6t_'name'.so for IPv6),
       so that the iptables main program can load the corresponding dynamic library
       from the match name */
    const char *version;
    /* version information, usually set to NETFILTER_VERSION */
    size_t size;
    /* size of the match data; must be given with the IPT_ALIGN() macro */
    size_t userspacesize;
    /* since the kernel may modify some fields, size may differ from the amount of
       data compared in user space; in that case the unchanged data is placed at the
       front of the data area and its size is given here; normally this equals size */
    void (*help)(void);
    /* called when iptables is asked for information about this match
       (e.g. iptables -m ip_ext -h); its output follows the general help text of
       the iptables program */
    void (*init)(struct ipt_entry_match *m, unsigned int *nfcache);
    /* initialization, called before parse */
    int (*parse)(int c, char **argv, int invert, unsigned int *flags,
                 const struct ipt_entry *entry,
                 unsigned int *nfcache,
                 struct ipt_entry_match **match);
    /* scans and accepts the command-line parameters belonging to this match;
       returns non-zero when a parameter is accepted; flags is used to keep state */
    void (*final_check)(unsigned int flags);
    /* called after all command-line parameters have been processed;
       if they are not correct it should exit (exit_error()) */
    void (*print)(const struct ipt_ip *ip,
                  const struct ipt_entry_match *match, int numeric);
    /* called when the rules of a table are listed, to display the extra
       information of this match in a rule */
    void (*save)(const struct ipt_ip *ip, const struct ipt_entry_match *match);
    /* outputs this match's command-line parameters to standard output in the
       format accepted by parse; used by the iptables-save command */
    const struct option *extra_opts;
    /* NULL-terminated parameter list; struct option is the same structure used by getopt(3) */
    /* the following fields are used internally by iptables; users need not care */
    unsigned int option_offset;
    struct ipt_entry_match *m;
    unsigned int mflags;
    unsigned int use;
};

struct option
{
    const char *name;
    /* parameter name, used to match the command-line input */
    int has_arg;
    /* whether the parameter takes an argument: 0 means no argument,
       1 means a required argument, 2 means an optional argument */
    int *flag;
    /* where to put the result: if NULL, the val below is returned;
       otherwise 0 is returned and val is stored at the location flag points to */
    int val;
    /* default value to return */
};

For example, a --opt parameter is described in struct option as { "opt", 1, 0, '1' }, meaning that --opt takes an argument and that, when --opt appears on the command line, '1' is passed as the int c parameter of parse(). In practice each of these functions may be empty, as long as the name field is identical to the name of the corresponding kernel match. After the iptables_match structure has been defined, register_match() is called so that the iptables main program recognizes the new match. When an iptables command specifies the ip_ext match, the iptables main program automatically loads libipt_ip_ext.so and runs its _init() entry point, so the register_match() call should be placed in _init(); a minimal skeleton is sketched below.
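A sketch of the corresponding user-space file, libipt_ip_ext.c (hypothetical, assuming the iptables 1.2.x extension interface described above; this match takes no options):

#include <stdio.h>
#include <getopt.h>
#include <iptables.h>

static void help(void)
{
    printf("ip_ext match takes no options\n");
}

static struct option opts[] = { { 0 } };

static void init(struct ipt_entry_match *m, unsigned int *nfcache)
{
}

static int parse(int c, char **argv, int invert, unsigned int *flags,
                 const struct ipt_entry *entry,
                 unsigned int *nfcache,
                 struct ipt_entry_match **match)
{
    return 0;                   /* no option of ours was recognized */
}

static void final_check(unsigned int flags)
{
}

static void print(const struct ipt_ip *ip,
                  const struct ipt_entry_match *match, int numeric)
{
    printf("ip_ext ");
}

static void save(const struct ipt_ip *ip, const struct ipt_entry_match *match)
{
}

static struct iptables_match ip_ext = {
    NULL,
    "ip_ext",                   /* must equal the kernel match name */
    NETFILTER_VERSION,
    IPT_ALIGN(0),               /* no private match data */
    IPT_ALIGN(0),
    &help,
    &init,
    &parse,
    &final_check,
    &print,
    &save,
    opts
};

void _init(void)
{
    register_match(&ip_ext);
}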

3.2.3 The target data structure

The target data structure, struct ipt_target, is essentially the same as struct ipt_match; the difference is simply that the match function pointer is replaced by a target function pointer:

struct ipt_target
{
    ......
    unsigned int (*target)(struct sk_buff **pskb,
                           unsigned int hooknum,
                           const struct net_device *in,
                           const struct net_device *out,
                           const void *targinfo,
                           void *userdata);
    /* returns IPT_CONTINUE (-1) if rule traversal should continue; otherwise it
       returns a verdict such as NF_ACCEPT or NF_DROP, and the caller decides what
       to do with the packet according to the return value */
    ......
};
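A minimal kernel-side target sketch (a hypothetical SAMPLE target with assumed names, same 2.4.x conventions as the match example above) that leaves packets untouched:

static unsigned int
sample_target(struct sk_buff **pskb,
              unsigned int hooknum,
              const struct net_device *in,
              const struct net_device *out,
              const void *targinfo,
              void *userdata)
{
    /* a real target would modify *pskb or return a verdict here */
    return IPT_CONTINUE;        /* keep traversing the remaining rules */
}

static struct ipt_target sample_reg = {
    { NULL, NULL },             /* list */
    "SAMPLE",                   /* name: the module file would be ipt_SAMPLE.o */
    sample_target,
    NULL,                       /* checkentry */
    NULL,                       /* destroy */
    THIS_MODULE
};

/* in init_module() / cleanup_module():
 *     ipt_register_target(&sample_reg);
 *     ipt_unregister_target(&sample_reg);
 */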

Corresponding to ipt_register_match(), a target is registered with ipt_register_target(); file naming, usage and so on are the same as for matches.

3.2.4 User-level declaration of a target

The user-level declaration of a target uses the struct iptables_target structure, which has the same layout as struct iptables_match. register_target() is used to register a new target, and the method is the same as for matches.

3.3 Connection tracking protocol helpers

As mentioned earlier, NAT processes only the first packet of a connection (TCP or UDP) and relies on the connection tracking mechanism to handle the subsequent packets. Connection tracking is a mechanism that can exist independently of NAT; it is used to handle actions related to higher-layer protocols at the transport layer (or even the application layer). The implementation of connection tracking in Netfilter is rather complex and it is not extended very often in practice, so it is not expanded on here.

3.4 The iptables patch mechanism

To extend Netfilter-iptables, users can of course modify the source code directly and rebuild, but for the sake of standardization and simplicity the iptables source package provides a patch mechanism, in the hope that users will write their extensions in its format rather than modify the kernel and the iptables code separately. Matching the structure of Netfilter-iptables, an iptables extension requires changes to both the kernel and the iptables program, so the patch is divided into two parts. In iptables-1.2.8 the kernel patches are provided through the patch-o-matic package, while the extensions directory in the iptables-1.2.8 source tree holds the patches to the iptables program itself. patch-o-matic provides a 'runme' script to apply the kernel patches; by its convention a kernel patch consists of five parts with fixed naming rules. For example, if the extension is named ip_ext, the names and functions of the five files are:

ip_ext.patch: the main file, containing the patch to the kernel .c and .h source files, applied in the same way as an ordinary kernel patch (patch -p0);
ip_ext.patch.config.in: a modification to /net/ipv4/netfilter/Config.in; its first line is a line taken from the original Config.in marking the insertion point, and the lines that follow are added after that line. Its purpose is to make the newly added option appear in the kernel configuration interface;
ip_ext.patch.configure.help: a change to /Documentation/Configure.help; the first line is an index line from the original Configure.help and the following lines are added after it. Its purpose is to supply help text for the new kernel configuration option;
ip_ext.patch.help: help information about this patch, displayed by the runme script;
ip_ext.patch.makefile: a modification to /net/ipv4/netfilter/Makefile, in the same format as the two files above, adding at the specified place the make instructions used to build ipt_ip_ext.o.

Examples of all of these can be found in the source files under patch-o-matic. The patch to iptables itself is simpler: add a libipt_ip_ext.c file to the extensions directory and add the string ip_ext to the PF_EXT_SLIB macro in the corresponding Makefile. At installation time the patch-applying make targets can be run in the iptables root directory (make pending-patches, or make patch-o-matic for the full set); they invoke the runme script and apply the patch files under patch-o-matic to the kernel, after which the kernel must be reconfigured and recompiled. If only a particular patch is needed, runme ip_ext can be run directly in the patch-o-matic directory to install just the ip_ext patch; the kernel still has to be rebuilt afterwards for the patch to take effect. iptables' own make / make install process then compiles and installs libipt_ip_ext.so, and the new iptables command can recognize the ip_ext extension by loading libipt_ip_ext.so. An extension may also define its own header file, which is normally needed by both the kernel and the user-space parts; it is therefore usually placed in the include/linux/netfilter_ipv4/ directory and included from the .c files as linux/netfilter_ipv4/....

Flexibility is a major feature of the Netfilter-iptables mechanism, and extending it is the key to applying it. To serve this goal, Netfilter-iptables is not only convenient to extend but also comes with a set of extension conventions and a large number of existing extensions that can be consulted as references.

4. Case study: implementing a VPN with Netfilter

The key to a virtual private network is tunneling, that is, encapsulating packets for transport across the public network. With the powerful packet-processing capability of Netfilter-iptables, a highly configurable VPN can be implemented at a very small development cost.

The first part of this article described how packets flow through the IP tunnel mechanism. Two elements of the IP tunnel technique stand out. One is a special network device, tunl0 ~ tunlx, used for sending: intranet packets that need encapsulation are directed to this device by appropriate routes, the device's "driver" encapsulates them, and they are then sent out through the real network device as ordinary IP packets. The other is a special IP-layer protocol, IPIP, used for receiving: encapsulated packets arriving from the external network carry the special protocol number (IPIP), they are finally decapsulated in that protocol's handler (ipip_rcv()), and the inner packet, now bearing its intranet IP header, is injected back into the bottom of the IP stack (netif_rx()) to go through reception again.

It is not hard to see that after a packet leaves the tunlx device (that is, after encapsulation is complete) it must pass Netfilter's OUTPUT hook point, and before it is decapsulated (ipip_rcv()) it must pass Netfilter's INPUT hook point; it is therefore entirely feasible to do the encapsulation and decapsulation work at these two hooks. Packet reception can simply follow the IPIP approach, that is, define a dedicated protocol. The key problem is how to pick out the outgoing packets that need encapsulation and distinguish them from ordinary, non-VPN packets. Our approach uses Netfilter-iptables: the standard private IP address ranges (such as 192.168.x.x) are used inside the intranets, and VPN traffic is separated by IP address. A VPN configured by IP address fits well with existing system administration, is easy to upgrade as the VPN system evolves, and can be combined with Netfilter-iptables firewall settings so that the VPN and the firewall jointly protect a secure private network. In our solution the VPN works in LAN-to-LAN mode (the dial-in mode makes no technical difference); the VPN management component is placed at each LAN gateway, forming a security gateway. Nodes inside a LAN can access the non-sensitive external network outside the firewall (for example, most Internet sites) and can also, screened by the security gateway, use the VPN to reach the LANs of the other private networks. Since this application differs from the original three tables both in function and in hook points, we created a separate vpn table modeled on the filter table and distributed the VPN function over the following four parts:

iptables ENCRYPT target: packets destined for a secure subnet are handed to the ENCRYPT target, which encrypts the original packet, generates an authentication code, and encapsulates the packet under a public-network IPIP_EXT header. ENCRYPT is configured at the OUTPUT and FORWARD hook points of the vpn table and decides from the destination IP address whether encryption is required.

IPIP_EXT protocol: a custom IP-layer protocol modeled on IPIP; its receive handler replaces the public-network IP header of an arriving packet with the IP header used between the secure subnets and then re-injects the packet into the bottom of the IP protocol stack.

iptables ipip_ext match: matches packets whose protocol number is the custom IPIP_EXT. Packets arriving from a secure subnet should be of the IPIP_EXT protocol type before ipip_ext_rcv() has processed them; otherwise they should be discarded.

iptables DECRYPT target: a packet received from a secure subnet, after the IPIP_EXT protocol handler has restored the inter-subnet IP header, enters DECRYPT target processing, which fully decrypts and unpacks it.

The flow of a packet through the whole system is shown in Figure 8 (VPN packet flow). For outgoing packets (from the local host or from the intranet), the internal destination address matches a rule at the FORWARD/OUTPUT point and ENCRYPT is executed; after returning from Netfilter, the packet continues on its way as a locally generated IPIP_EXT-protocol packet. For received packets, those whose protocol number is IPPROTO_IPIP_EXT match the ipip_ext match, and all others are discarded at the INPUT point; the packets allowed through are received by the IPIP_EXT protocol handler, which restores the intranet IP header and re-injects the packet into the protocol stack with netif_rx(). The packet then matches the rules at the INPUT/FORWARD point and DECRYPT is executed; only packets that pass DECRYPT may continue up to the higher-layer protocols or into the intranet.

Appendix: sample iptables settings:

iptables -t vpn -P FORWARD DROP

iptables -t vpn -A OUTPUT -d 192.168.0.0/24 -j ENCRYPT
iptables -t vpn -A INPUT -s 192.168.0.0/24 -m ipip_ah -j DECRYPT
iptables -t vpn -A FORWARD -s 192.168.0.0/24 -d 192.168.1.0/24 -j DECRYPT
iptables -t vpn -A FORWARD -s 192.168.1.0/24 -d 192.168.0.0/24 -j ENCRYPT

Here 192.168.0.0/24 is the peer (target) subnet and 192.168.1.0/24 is the local subnet.

References

[Linus Torvalds, 2003] Linux kernel source code v2.4.21
[Paul Russell, 2002] Linux Netfilter Hacking HOWTO v1.2
[Paul Russell, 2002] iptables source code v1.2.1a
[Paul Russell, 2000] Netfilter Tutorial, LinuxWorld, San Jose, August 2000
[Oskar Andreasson, 2001] iptables Tutorial 1.0.9

