
Repeated games and learning for packet forwarding




In the second case, when node i participates in routes, one of which is exactly two hops, and deg_in(i) >= 3, both the announcement from the source of the two-hop route and the aggregate announcements from the sources of the remaining routes serve as the final announcements. We note that the intersection of the aggregate announcements will incriminate a certain node.

The node that does not tell the truth can then be determined by majority voting. Finally, for the case in which node i participates in routes that have more than two hops and deg_in(i) >= 4, the sources can form two groups and use the previous game rule. The lying node will again be detected by majority voting.
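As a small illustration of this majority-voting rule (the function name, data layout, and node ids below are assumptions for the sketch, not the chapter's notation), each source announces the set of nodes it incriminates, and the node accused by a strict majority of sources is declared the deviator:

from collections import Counter

def detect_deviator(announcements):
    # announcements: one set of accused node ids per source route.
    votes = Counter()
    for accused in announcements:
        votes.update(accused)
    node, count = votes.most_common(1)[0]
    # Punish only when a strict majority of sources agree.
    return node if count > len(announcements) / 2 else None

# Two truthful sources incriminate node 7; the lying node accuses node 2.
print(detect_deviator([{7}, {7}, {2}]))  # -> 7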

In summary, any potential deviation in a network satisfying the conditions of Theorem 11.3.3 can be detected.

Moreover, the game rules guarantee that any feasible rational utilities can be enforced. We note that, from the announcement forwarder's perspective, there are two scenarios: either the announcement contains negative information about the forwarder itself, or it contains negative information about other nodes. In the first case, the forwarding node might not forward the announcement; however, even if that node withholds it, there is only a small probability that the announcement fails to propagate through the whole network, as illustrated in Figure 11.3. Moreover, the condition that every node is monitored by at least two nodes makes the illustrated case even less probable. In the second case, the forwarding nodes have no immediate gain from withholding the announcement, i.e., the forwarder is indifferent regarding forwarding it. However, it is advantageous for the forwarding nodes to forward the truthful announcement in order to catch and punish the deviating node.

Otherwise, the forwarding nodes may themselves become victims of the deviation in the future. Moreover, the announcement consumes much less energy than the packet transmission itself. Hence, by indifferent we mean that each node is better off making a truthful announcement, which consumes only a small portion of the transmission energy, rather than facing the larger loss caused by the deviating node.

The analyses for the different information structures in Sections 11.3.1 and 11.3.2 guarantee that any individually rational utilities can be enforced under some conditions. However, the individual distributed nodes need to know how to cooperate, i.e., what the good packet-forwarding probabilities are. In the next section, we describe learning algorithms that can be employed to achieve better utilities.

Figure 11.3 Bypassing a blocking forwarder. Suppose that the victim node, S, is at the edge of the network and every transmission coming from node S must go through node f. Suppose that node f deviates and blocks the announcement from S. Node S can increase its transmission power to bypass node f and broadcast the announcement.

11.4 Self-learning algorithms

From Section 11.3, any Pareto-dominant solution better than the one-stage NE can be sustained. However, the analysis does not explicitly determine which cooperation point is to be sustained.

In fact, the system can be optimized with respect to different cooperating points, depending on the system designer's choices. For instance, the system can be designed to maximize the weighted sum of the nodes' average infinitely-repeated-game utilities:

U_{sys} = \sum_{i=1}^{N} w(i) U_i, \qquad \sum_{i=1}^{N} w(i) = 1.    (11.24)

In particular, when w(i) = 1/N for all i, this reduces to maximizing the average utility per node, which is commonly employed in network optimization:

U_{sys} = \frac{1}{N} \sum_{i=1}^{N} U_i.    (11.25)

We use (11.25) as an example, but we emphasize that any system objective function can be incorporated into the learning algorithm in a similar way. From an individual point of view, as long as cooperation generates a better utility than non-cooperation, the autonomous node will participate.
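A small numerical sketch of (11.24) and (11.25), assuming the per-node average repeated-game utilities U_i are already known; the function name and the sample utility values are illustrative, not from the text:

def system_utility(U, w=None):
    N = len(U)
    if w is None:
        w = [1.0 / N] * N            # equal weights recover (11.25)
    assert abs(sum(w) - 1.0) < 1e-9  # the weights must sum to one
    return sum(wi * ui for wi, ui in zip(w, U))

U = [0.8, 0.6, 0.9]                          # hypothetical per-node utilities
print(system_utility(U))                     # average utility per node, (11.25)
print(system_utility(U, [0.5, 0.25, 0.25]))  # weighted sum, (11.24)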

Moreover, any optimization other than the system optimization can be identified by the other nodes as a deviation; consequently, punishment can be applied in the future. The basic idea of the learning algorithm is to search iteratively for a good cooperating forwarding probability.
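A minimal sketch of this iterative-search idea, under the assumption that a node can evaluate the system utility for a candidate forwarding probability over one learning period; it is not the chapter's exact algorithm, and utility_of is a hypothetical stand-in for that measurement:

import random

def learn_forwarding_prob(utility_of, p=0.5, step=0.05, iters=200):
    best = utility_of(p)
    for _ in range(iters):
        # Perturb the common forwarding probability and keep improvements.
        q = min(1.0, max(0.0, p + random.choice([-step, step])))
        u = utility_of(q)
        if u > best:
            p, best = q, u
    return p

# Toy utility: forwarding raises throughput but costs energy; peak at p = 0.8.
print(learn_forwarding_prob(lambda p: p - p**2 / 1.6))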

Similarly to the punishment design, we consider learning schemes for different types of information availability, namely perfect observability and local observability. In parallel with the system model in Section 11.2, we consider time-slotted transmission that interleaves a learning mode and a cooperation-maintenance mode, as shown in Figure 11.1. In the learning mode, the nodes search for better cooperating points. In the cooperation-maintenance mode, nodes monitor the actions of other nodes and apply punishment if there is any deviation.
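The interleaving can be pictured with the following sketch; the slot schedule and the helper hooks (learn, maintain, punish) are assumptions made for illustration, since the chapter only specifies that the two modes alternate as in Figure 11.1:

def run_slots(num_slots, learn, maintain, punish, learn_every=10):
    for t in range(num_slots):
        if t % learn_every == 0:
            learn()                    # learning mode: search for better points
        else:
            deviator = maintain()      # maintenance mode: monitor neighbours
            if deviator is not None:
                punish(deviator)       # punishment can span learning periods

# Stub usage: no deviation is observed, so punish is never called.
run_slots(30, learn=lambda: None, maintain=lambda: None, punish=print)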

In the learning mode, the nodes have no incentive to deviate: since they do not yet know whether deviation would pay off, they do not want to miss the chance of obtaining better utilities. It is also worth mentioning that, if a node deviates just before a learning period, it will still be punished in the following cooperation-maintenance period. Hence the assumption of an infinitely repeated game remains valid for this time-slotted transmission system.