Monday, January 30, 2006

Initial Survey of VANET Literature

Initial Survey of VANET papers
Nathan Balon

A review of papers on vehicular ad hoc networks, identifying the emphasis of each paper:

  1. What problem is addressed?
  2. What solution is proposed?
  3. How do the solutions differ from previous solutions?
  4. What are the main contributions and conclusions?

The Broadcast Storm Problem in a Mobile Ad Hoc Network, by S.-Y. Ni, Y.-C. Tseng, Y.-S. Chen, and J.-P. Sheu

  1. The problem addressed is the use of flooding to propagate a broadcast message throughout a network. The “broadcast storm problem” refers to the problems associated with flooding. First, flooding results in a large number of duplicate packets being sent in the network. Second, a high amount of contention will take place, because nodes in close proximity to each other will try to rebroadcast the message. Third, collisions are likely to occur because the RTS/CTS exchange is not applicable to broadcast messages.
  2. There are two ways to limit the problems caused by a broadcast storm. First, an approach can be taken that reduces the possibility of redundant rebroadcasts. Second, the timing of rebroadcasts can be differentiated. The paper proposes five schemes: probabilistic, counter-based, distance-based, location-based, and cluster-based (a small sketch of the counter-based scheme appears after this list).
  3. Most other approaches try to solve the problem by assigning time slots for the transmission of a broadcast message. The problem with assigning time slots is that global synchronization is difficult to achieve in ad hoc networks.
  4. The simulations showed that a simple counter-based implementation can eliminate a large number of redundant broadcasts in a dense network. The location-based scheme was found to be the best solution, and the results showed that it worked well for a wide range of host distributions. The only drawback to the location-based scheme is that a device such as a GPS receiver is needed.
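
To make the counter-based scheme concrete, here is a minimal sketch of the suppression rule: a node waits a short random assessment delay after the first reception of a message, counts duplicate copies heard in the meantime, and cancels its own rebroadcast once a threshold is reached. The threshold and delay values below are illustrative assumptions, not the parameters studied in the paper.

```python
import random

# Minimal sketch of a counter-based broadcast suppression rule.
COUNTER_THRESHOLD = 3          # C: illustrative value, not from the paper
MAX_ASSESSMENT_DELAY = 0.01    # seconds; hypothetical parameter

class CounterBasedNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.pending = {}      # msg_id -> number of copies heard so far

    def on_receive(self, msg_id):
        if msg_id in self.pending:
            # Duplicate heard while waiting: another neighbor already covered
            # much of our area, so count it against the rebroadcast decision.
            self.pending[msg_id] += 1
            return None
        self.pending[msg_id] = 1
        # Wait a random time before deciding (returned here as a delay value).
        return random.uniform(0, MAX_ASSESSMENT_DELAY)

    def on_timer_expired(self, msg_id):
        count = self.pending.pop(msg_id, 0)
        if count < COUNTER_THRESHOLD:
            return "rebroadcast"   # expected additional coverage still worthwhile
        return "suppress"          # enough copies heard; rebroadcast adds little
```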

Urban Multi-Hop Broadcast Protocol for Inter-Vehicle Communication Systems, by G. Korkmaz, E. Ekici, F. Ozguner, and U. Ozguner

  1. The problem addressed in the paper is the multi-hop broadcast of information in an Inter-Vehicle Communication (IVC) system. The methodology addresses the hidden node problem, the broadcast storm problem, and the reliability of multi-hop broadcasting.
  2. The authors suggest creating an extension to IEEE 802.11. The problem is addressed by locating the furthest node in a given direction without prior topology information; that node is then responsible for forwarding the message and acknowledging the successful broadcast (a simplified sketch of this forwarder election follows the list). The protocol also suggests the use of repeaters at intersections to eliminate the problems caused by the shadowing of large buildings in an urban environment.
  3. Other protocols improve on blind flooding, but most of these methods are not effective for all node densities and packet loads. A number of solutions that have been proposed take a proactive approach to address these problems, but these solutions are not acceptable in highly mobile environments.
  4. Since the protocol obeys the rules defined by the 802.11 standard it can be used with other nodes that do not use this broadcast protocol. The simulations showed a high success rate when the network has a high packet load and a dense vehicle distribution.
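
The forwarder election in UMB rests on a contention phase in which receivers farther from the sender keep the channel busy longer, so the furthest node wins the right to forward. The sketch below captures only that idea under simplifying assumptions (a fixed number of distance segments, distance known from GPS); it is not the full protocol, which also covers acknowledgments and the intersection repeaters.

```python
# Simplified sketch of furthest-forwarder election; segment count and slot
# time are hypothetical parameters.
NUM_SEGMENTS = 8
SLOT_TIME_US = 50

def burst_slots(distance_from_sender, max_range):
    """Number of channel-busy slots a receiver emits after hearing the request."""
    distance = min(max(distance_from_sender, 0.0), max_range)
    segment = int(distance / max_range * (NUM_SEGMENTS - 1e-9))
    return segment + 1  # farther nodes keep transmitting longer

def elect_forwarder(receivers, max_range):
    """receivers: list of (node_id, distance_m); the furthest segment wins.
    Ties within a segment would be broken by a random backoff in the real
    protocol; here the first maximum is returned for simplicity."""
    best = max(receivers, key=lambda r: burst_slots(r[1], max_range))
    return best[0]

# Example: with a 300 m range, a node at 280 m out-lasts one at 120 m.
print(elect_forwarder([("a", 120.0), ("b", 280.0)], 300.0))  # -> "b"
```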

Smart Broadcast Algorithm for Inter-Vehicle Communications, by E. Fasolo, R. Furiato, and A. Zanella

  1. The problem addressed is developing a broadcast protocol that provides high reliability and low propagation delay.
  2. The paper proposes a distributed, position-aware “Smart Broadcast” algorithm. Each node that receives a broadcast forwards the packet after a random backoff that is determined by the node's distance from the source (a minimal sketch of this backoff is shown after the list). The algorithm makes use of GPS to speed up the propagation of a message.
  3. Little prior attention has been given to designing efficient and reliable broadcast propagation algorithms.
  4. The simulations showed that the algorithm performed well, approaching the performance bound of MCDS-based solutions. A drawback of the algorithm is the difficulty of setting some of its parameters, such as the contention window size.
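
A minimal sketch of a position-dependent backoff in the spirit of the Smart Broadcast algorithm: the transmission range is divided into sectors, and nodes in the farthest sector draw their backoff from the earliest (smallest) window, so they tend to win the contention and forward first. The sector count and per-sector window size are assumed values for illustration, not the parameters from the paper.

```python
import random

NUM_SECTORS = 4         # hypothetical number of distance sectors
WINDOW_PER_SECTOR = 8   # contention slots reserved for each sector (assumption)

def backoff_slots(distance_from_source, tx_range):
    distance = min(max(distance_from_source, 0.0), tx_range)
    # Sector 0 is the farthest ring, so it gets the earliest slots.
    sector = NUM_SECTORS - 1 - int(distance / tx_range * (NUM_SECTORS - 1e-9))
    low = sector * WINDOW_PER_SECTOR
    return random.randint(low, low + WINDOW_PER_SECTOR - 1)
```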

Double-Covered Broadcast (DCB): A Simple Reliable Broadcast Algorithm in MANETs, by W. Lou and J. Wu

  1. The paper looks at ways to reduce broadcast redundancy in an environment with a high transmission error rate. Forwarding nodes form a connected dominating set; finding the minimum connected dominating set has been shown to be an NP-complete problem. The paper also addresses the issue of acknowledging a broadcast: if all nodes were to send acknowledgments on the successful receipt of a broadcast packet, the “ACK explosion problem” would occur.
  2. The double-covered broadcast (DCB) algorithm uses broadcast redundancy to improve the delivery ratio of broadcast messages in an environment with a high error ratio. The algorithm works by having only specified nodes in the sender's 1-hop range forward the message. Forwarding nodes are selected to meet two requirements: (1) the sender's 2-hop neighbor set is fully covered, and (2) the sender's 1-hop neighbors are either forward nodes, or non-forward nodes covered by at least two forwarding neighbors (the sender itself and one of the selected forward nodes). A small sketch of this selection check follows the list. The retransmission by the forwarding nodes serves as an implicit acknowledgment of the message; if the sender does not receive the implicit acknowledgment, it retransmits the broadcast.
  3. This approach differs from other solutions in that the set of forwarding nodes is selected from among the 1-hop neighboring nodes.
  4. The DCB algorithm is sensitive to node mobility. When the nodes in the network were highly mobile, the performance of the algorithm dropped significantly, so it may not be suitable for vehicular ad hoc networks. The reason is that the broadcasting node needs to maintain an up-to-date set of neighbors.
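
The selection criterion can be expressed as a small check over locally gathered 1-hop and 2-hop neighbor tables. The sketch below only verifies whether a candidate forward set satisfies the two conditions described above; the heuristic the paper uses to build the set is not reproduced. The `neighbors` mapping (node to its set of 1-hop neighbors) is an assumed data structure for illustration.

```python
# Check whether forward_set satisfies the DCB double-coverage conditions.
def satisfies_dcb(sender, forward_set, neighbors):
    one_hop = neighbors[sender]
    two_hop = set().union(*(neighbors[n] for n in one_hop)) - one_hop - {sender} \
        if one_hop else set()

    # Condition 1: every 2-hop neighbor is covered by some forward node.
    covered = set().union(*(neighbors[f] for f in forward_set)) if forward_set else set()
    if not two_hop <= covered:
        return False

    # Condition 2: every non-forwarding 1-hop neighbor must also be covered by
    # at least one forward node (it is already covered by the sender itself,
    # which gives the "double" coverage).
    for v in one_hop - set(forward_set):
        if not any(v in neighbors[f] for f in forward_set):
            return False
    return True
```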

Selecting Forwarding Neighbors in Wireless Ad Hoc Networks, by G. Calinescu, I. Mandoiu, P.-J. Wan, and A. Zelikovsky

  1. The problem addressed in the paper is finding a forwarding set of the minimum size. Minimum Forwarding Set Problem: Given a source A, let D and P be the sets of 1 and 2-hop neighbors of A. Find a minimum-size subset F of D such that every node in P is within the coverage area of at least one node from F.
  2. Redundancy can be reduced by finding the minimum number of 1-hop neighbors that cover all 2-hop neighbors (a simple greedy approximation of this set-cover problem is sketched after the list).
  3. The authors propose two algorithms for finding the minimum forwarding set. The first algorithm is a geometric O(n log n) factor-6 approximation algorithm. The second algorithm is a combinatorial O(n^2) factor-3 approximation algorithm. These improve on the previously best known algorithm, by Bronnimann and Goodrich, which guarantees an O(1) approximation in O(n^3 log n) time.
  4. The authors present a new algorithm to find the smallest subset of neighbors that cover all 2-hop neighbors.
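
The sketch below is not the authors' geometric algorithm; it is the standard greedy set-cover heuristic applied to the same Minimum Forwarding Set problem, shown only to make the problem statement concrete: at each step, pick the 1-hop neighbor that covers the most still-uncovered 2-hop neighbors.

```python
# P: set of 2-hop neighbors; coverage: dict mapping each 1-hop neighbor in D
# to the subset of P it can reach. Both are assumed inputs for illustration.
def greedy_forwarding_set(P, coverage):
    uncovered = set(P)
    forward_set = []
    while uncovered:
        # Pick the 1-hop neighbor covering the most still-uncovered 2-hop nodes.
        best = max(coverage, key=lambda d: len(coverage[d] & uncovered))
        gained = coverage[best] & uncovered
        if not gained:
            raise ValueError("some 2-hop neighbors cannot be covered")
        forward_set.append(best)
        uncovered -= gained
    return forward_set
```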

ADHOC MAC: New MAC Architecture for Ad Hoc Networks Providing Efficient and Reliable Point-to-Point and Broadcast Services, by F. Borgonovo, A. Capone, M. Cesana and L. Fratta

  1. The paper addresses the problem of reliable broadcast in wireless ad hoc networks. A reliable broadcast method is essential for exchanging location information in VANETs and for an address resolution protocol (ARP). The hidden terminal problem and the exposed node problem make it difficult to provide reliable broadcasts in wireless networks. This paper introduces a new protocol, Reliable R-ALOHA (RR-ALOHA). The problem caused by the traditional flooding approach is that if a message is broadcast to n nodes, then there will be n rebroadcasts of the message.
  2. The solution to the problem is a new MAC protocol called RR-ALOHA, which implements a dynamic time division multiple access (TDMA) mechanism that allows nodes to reserve resources. The protocol uses a Basic Channel (BCH) that provides nodes with knowledge of transmissions in overlapping segments. The information carried on the Basic Channel overcomes the problem of hidden terminals and exposed nodes by giving each node information about all nodes in its 2-hop range (a minimal sketch of the slot-reservation idea follows the list).
  3. The RR-ALOHA protocol is an extension of the R-ALOHA protocol.
  4. The results of the simulations run by the authors showed that the average channel setup delay was a few hundred milliseconds.
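
A rough sketch of the slot-reservation idea: each 1-hop neighbor's Frame Information broadcast reports which slots of the TDMA frame it perceives as busy, and a joining node may claim as its BCH only a slot that every neighbor reports free, which is what yields 2-hop awareness. The frame length is an assumed parameter, and the FI encoding here is a simplification for illustration.

```python
import random

FRAME_LENGTH = 16   # hypothetical number of slots per frame

def choose_bch(fi_vectors):
    """fi_vectors: one boolean list of length FRAME_LENGTH per 1-hop neighbor,
    True meaning 'busy as seen by that neighbor'."""
    free_slots = [
        s for s in range(FRAME_LENGTH)
        if all(not fi[s] for fi in fi_vectors)
    ]
    if not free_slots:
        return None        # frame saturated; the node must wait and retry
    return random.choice(free_slots)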

MAC for Ad Hoc Inter-Vehicle Network: Services and Performance, by F. Borgonovo, L. Campelli, M. Cesana, and L. Coletti

  1. The paper analyzes the performance of ADHOC MAC, a protocol that was developed for the CarTALK 2000 project. ADHOC MAC provides a distributed reservation protocol to dynamically establish a reliable single-hop broadcast channel (BCH).
  2. The paper does not provide any new solutions. The purpose of the paper is to analyze the performance of the broadcast channel.
  3. The solution doesn't differ from previous methods; the goal of the paper is to analyze the ADHOC MAC protocol.
  4. The main contribution of the paper is the analysis of the performance of 1-hop and multi-hop broadcast messages. The results showed that ADHOC MAC provided high performance in terms of access delay and radio resource reuse.

A Decentralized Location Based Channel Access Protocol for Inter-Vehicle Communication, by S. Katragadda, G. Murthy, R. Rao, M. Kumar and S. R

  1. The paper addresses the problem of channel reuse in a vehicular ad hoc network. The efficient allocation of channels in an ad hoc network allows the network to become more scalable.
  2. The authors propose the Location Channel Access (LCA) protocol to assign a channel to a node in the network based on the node's geographic position. Nodes are assigned a channel with a distributed algorithm, based on information collected from a geo-location system such as GPS (a rough sketch of a location-to-channel mapping is shown after the list).
  3. A channel assignment method similar to the one used by cellular systems is applied to mobile ad hoc networks.
  4. The paper proposes a protocol to assign channels dynamically to vehicles in an ad hoc network, without the use of a central controller, based on a vehicle's location in the network.
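
A rough sketch of the general idea, not the exact scheme from the paper: the road area is tiled into fixed-size cells derived from GPS coordinates, and the channel follows deterministically from the cell coordinates so that adjacent cells use different channels, much like frequency reuse in a cellular system. Cell size, channel count, and the reuse pattern are illustrative assumptions.

```python
CELL_SIZE_M = 200.0   # hypothetical cell edge length
NUM_CHANNELS = 6      # e.g. the number of DSRC service channels

def channel_for_position(x_m, y_m):
    cell_x = int(x_m // CELL_SIZE_M)
    cell_y = int(y_m // CELL_SIZE_M)
    # Simple reuse pattern: with 6 channels, horizontally, vertically, and
    # diagonally adjacent cells all map to different channels.
    return (cell_x + 2 * cell_y) % NUM_CHANNELS
```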

Probabilistic Broadcast for Flooding in Wireless Mobile Ad Hoc Networks, by Y. Sasson, D. Cavin, and A. Schiper

  1. The problem addressed is improving the efficiency of a flooding algorithm, by using a probabilistic broadcast method.
  2. The authors use a probabilistic algorithm for relaying a broadcast message, where a node has probability p of rebroadcasting a message and probability 1 – p of taking no action (a minimal sketch of this gossip rule appears after the list). The paper explores the possibility of applying phase transitions for selecting the probability of rebroadcasting a message. Phase transition is a well-known phenomenon from percolation theory and random graphs.
  3. Many other studies try to optimize flooding in ad hoc networks by using a deterministic approach. This paper explores whether a probabilistic approach may be more suitable, since ad hoc networks are highly dynamic.
  4. The study showed there is a difference between the ideal behavior and the results of the actual simulations. The authors found that probabilistic flooding does not exhibit the bimodal behavior that percolation and random graph theory would suggest. Probabilistic flooding did increase the success rate of transmissions when the network is densely populated. One direction for future work is an algorithm to dynamically adjust the probability p. Another is to understand the effect of modifying the transmission range with respect to p.
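
The rebroadcast decision itself is tiny: on the first copy of a message a node rebroadcasts with probability p, and every later copy is ignored. A minimal sketch follows; the value of p is the parameter whose phase-transition behavior the paper investigates, and 0.65 below is just a placeholder.

```python
import random

class GossipNode:
    def __init__(self, node_id, p=0.65):
        self.node_id = node_id
        self.p = p
        self.seen = set()

    def on_receive(self, msg_id):
        if msg_id in self.seen:
            return False                        # duplicate: never rebroadcast
        self.seen.add(msg_id)
        return random.random() < self.p         # rebroadcast with probability p
```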

A Differentiated Distributed Coordination Function MAC Protocol for Cluster-based Wireless Ad Hoc Networks, by L. Bononi, D. Blasi, and S. Rotolo

  1. The problem addressed in the paper is providing QoS for cluster-based networks. Clustering protocols produce a hierarchical network. The result of creating a hierarchy is that it simplifies routing, since packets only have to be routed to a particular cluster. The problem the authors consider is assigning a service class to a node based on the role that the node performs in the cluster.
  2. The authors propose the Differentiated Distributed Coordination Function (DDCF) to support a distributed MAC and clustering scheme. DDCF is similar to IEEE 802.11e; the difference is that IEEE 802.11e differentiates service on a per-flow basis, while DDCF differentiates service based on the role of a node (a rough sketch of role-based backoff parameters follows the list).
  3. Most approaches to QoS assign a priority on a per-flow basis. The authors suggest assigning a priority based on the role that a node plays in the network. For example, a node that is elected as the head of a cluster would have a higher priority than the other nodes in the cluster. Also, nodes that are responsible for routing messages between clusters are given a higher priority.
  4. The authors show through simulations that DDCF is an effective distributed differentiation scheme. One assumption made is that the cluster head will be chosen based on factors such as mobility. In a vehicular environment it is likely that all vehicles will be highly mobile, so this solution may not apply.
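
A rough sketch of role-based differentiation: the backoff parameters are selected from the node's cluster role rather than from the traffic class, so cluster heads and gateways statistically win contention more often. The CW and AIFS values below are illustrative assumptions, not the values used in the paper.

```python
import random

# Hypothetical per-role access parameters (smaller values contend more aggressively).
ROLE_PARAMS = {
    "cluster_head": {"cw_min": 7,  "aifs_slots": 2},
    "gateway":      {"cw_min": 15, "aifs_slots": 2},
    "member":       {"cw_min": 31, "aifs_slots": 3},
}

def draw_backoff(role):
    """Total deferral in slots before a transmission attempt for a given role."""
    params = ROLE_PARAMS[role]
    return params["aifs_slots"] + random.randint(0, params["cw_min"])
```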

An Adaptive Strategy for Maximizing Throughput in MAC Layer Wireless Multicast, by P. Chaporkar, A. Bhat, and S. Sarkar

  1. The problem addressed is providing multicast support at the MAC layer for wireless ad hoc networks. A problem with multicasting is that as the throughput increases the stability of the network decreases, so there is a trade-off between stability and throughput.
  2. The authors designed a policy that determines when a sender should transmit. The goal of the policy is to maximize throughput while maintaining the stability of the network. A MAC protocol was designed that acquires the local information needed to execute the policy. One problem that is addressed is that a transmission in a wireless network is essentially a broadcast, but not all nodes in the sender's region may be able to receive a message, because they are currently engaged in communication with other nodes in the network. A policy can be used to determine whether the transmission should occur immediately or be postponed until more nodes are able to receive the message. The transmission policy is based on the queue length of the sender and the number of available receivers (a rough sketch of such a threshold policy is shown after the list).
  3. Most of the previous research in wireless multicasting was concerned with the transport and network layers.
  4. The simulations performed show that the authors' approach outperforms the existing methods used in wireless multicasting. Some open problems are (a) coordinating the transmissions from different nodes to maximize performance, (b) the interaction between the proposed protocol and the transport layer, and (c) optimizing performance in the presence of mobility.
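
A hedged sketch of the kind of threshold policy described above: transmit now if enough of the intended receivers are currently free to listen, or if the sender's queue has grown long enough that waiting further would threaten stability; otherwise postpone. The specific thresholds are assumptions for illustration, not the authors' optimal policy.

```python
def should_transmit(queue_length, available_receivers, total_receivers,
                    min_fraction=0.5, queue_limit=10):
    """Return True to send the multicast frame now, False to postpone."""
    if total_receivers == 0:
        return False
    enough_receivers = available_receivers / total_receivers >= min_fraction
    queue_pressure = queue_length >= queue_limit
    return enough_receivers or queue_pressure
```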

Broadcast Reception Rates and Effects of Priority Access in 802.11-Based Vehicular Ad Hoc Networks, by M. Torrent-Moreno, D. Jiang, and H. Hartenstein

  1. The main problem addressed in the paper is how well broadcast performance scales. The authors study three areas: the probability that a broadcast message is received as a function of the intended receiver's distance from the sender, how the reception rate can be improved for emergency warnings, and the effect of using different radio models in the simulation.
  2. The authors propose using a priority access mechanism based on 802.11e to give access to nodes with a high priority, such as a vehicle transmitting an emergency warning. In this scenario the probability of reception is measured using the two-ray ground model and the Nakagami model.
  3. The authors' work differs from some of the previous work in the area in that a non-deterministic propagation model was used for the simulations.
  4. Using the two-ray ground model, large gains were achieved in the reception rates of emergency messages; using the Nakagami model, the authors found the reception rates were much worse. Multi-hop relaying and retransmission strategies may be used to increase the reception rate of emergency messages. More realistic radio models are needed for the simulation of wireless ad hoc networks. Topology control mechanisms, such as varying the power level of a node, need to be created for non-deterministic models. More insight is needed into why the results were much worse with the non-deterministic model, such as determining whether radio power fluctuation or node mobility was the main contributor to the poor results.

Vehicle-to-Vehicle Safety Messaging in DSRC, by Q. Xu, T. Mak, and R. Sengupta

  1. The design of a MAC protocol to send vehicle-to-vehicle safety messages is investigated. The paper addresses the need to give safety messages a higher priority than non-safety messages.
  2. A MAC protocol based on 802.11a is developed that allows messages to be prioritized. The protocol was then simulated using the Friis and two-ray models.
  3. The paper addresses the need for a protocol to send safety messages with varying priorities in DSRC.
  4. The authors found that the protocol should be feasible if network designers and safety application designers work together. In 200 ms a vehicle should be able to collect information from 140 vehicles in its surroundings; the average time it takes for a driver to react to an accident is 0.7 seconds. Intersection communication is an area of future work. Additional adaptive control at the MAC and physical layers is needed, along with further characterization of the classes of messages.

The Challenges of Robust Inter-Vehicle Communications, by M. Torrent-Moreno, M. Killat, and H. Hartenstein

  1. The paper addresses the adverse channel conditions in vehicular ad hoc networks. Some of the problems associated with VANETs are received signal strength fluctuation, high channel load, and high mobility. The hidden terminal problem is the main cause of poor performance in vehicular ad hoc networks.
  2. The solution proposed is to use more realistic models when designing vehicular ad hoc networks. A probabilistic model should be used when designing communication protocols for vehicular ad hoc networks.
  3. The paper addresses the challenges of using different types of messages, such as broadcast messages and event-driven messages, and the problems associated with bidirectional links.
  4. More realistic models need to be developed to properly evaluate communication protocols.

A Vehicle-to-Vehicle Communication Protocol for Cooperative Collision Warning, by X. Yang, J. Liu, F. Zhao, and N. Vaidya

  1. A communication protocol for cooperative collision warning messages is proposed in the paper. A major challenge in the construction of an emergency warning protocol is ensuring the timely delivery of messages.
  2. The authors of the paper propose the Vehicular Collision Warning Communication (VCWC) protocol. The emergency warning protocol uses congestion control policies, service differentiation mechanisms, and methods for emergency message dissemination. Congestion control is achieved by using a rate adjustment algorithm. The goal is a communication protocol that does not require too much overhead.
  3. A protocol that can be used to disseminate emergency warning and reduce the amount of congestion in the network is the aim of the proposed solution.
  4. The authors concluded that their proposed solution allows for low latency of emergency warning messages. The protocol also reduces the number of redundant emergency warning messages.

Fair Sharing of Bandwidth in VANETs, by M. Torrent-Moreno, P. Santi, and H. Hartenstein

  1. A problem with wireless networks is that there is a limited amount of bandwidth. There are two types of safety messages in VANETs. First, periodic messages alert other vehicles in the area of the vehicle's state. Second, emergency warnings are triggered by a non-safe driving condition. When the number of nodes sending periodic broadcasts is too large, because of high vehicle density, emergency warning messages will take a greater amount of time to be received.
  2. The authors of the paper propose the Fair Power Adjustment Algorithm (FPAV), a power control algorithm that finds the optimum transmission range of a node. The problem is presented as a max-min optimization problem. When an emergency condition arises, the periodic messages should be limited. First, the FPAV algorithm maximizes the minimum transmission range of all nodes using a synchronized approach; second, it maximizes the transmission range of each node individually while keeping the network under a certain load (a simplified sketch of the first stage follows the list).
  3. The paper addresses the issue of broadcasting safety messages in a densely populated network.
  4. Future work is to implement the algorithm in a fully distributed, asynchronous, and localized manner.
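
A simplified sketch of the first (synchronized) stage under strong assumptions: all nodes share one common power level, power maps directly to a coverage range, and the load seen at a node is just the number of nodes whose range reaches it. The common level is raised in lockstep for as long as the load everywhere stays within a maximum beaconing-load budget; the per-node refinement of the second stage is omitted. All parameter names are hypothetical.

```python
def fpav_stage_one(node_positions, power_levels, range_for_power, max_load):
    """node_positions: 1-D road coordinates in meters.
    power_levels: candidate power settings in increasing order.
    range_for_power: dict mapping a power setting to its range in meters.
    Returns the highest common power level that keeps every node's load bounded."""
    best = None
    for power in power_levels:                      # raise power in lockstep
        r = range_for_power[power]
        ok = True
        for x in node_positions:                    # check the load at every node
            load = sum(1 for y in node_positions if abs(x - y) <= r)
            if load > max_load:
                ok = False
                break
        if not ok:
            break
        best = power                                # still within the load budget
    return best
```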

MIT Computer Science Lectures on Video

I read a posting on Digg the other day that discussed the relevance of the languages used in computer science curricula. The author concludes that most schools are dumbing down their computer science programs by using Java. Since Java does not have pointers, the students aren't learning the material as well; to implement almost any data structure, pointers need to be used. Although Java doesn't explicitly have pointers, the idea behind using a reference to an object in Java is almost identical to the use of pointers in C++. He also mentions that many students are not taught recursion. The implementation of most non-trivial algorithms requires recursion; for instance, the easiest way to traverse a tree is with a recursive algorithm. I find it hard to believe that students are graduating from universities without knowing these topics. From my experience, college students are introduced to a number of different languages. In college I have used C, C++, Java, and Perl, to name a few.

Another argument that was made is that the current generation of computer science students is not being taught some of the important topics that were taught in the past. Following some of the links in the article, I came across a video lecture series from MIT that was produced in the 80's. The lectures are given by Hal Abelson and Gerald Jay Sussman, the authors of the book Structure and Interpretation of Computer Programs. The lectures seem very interesting, and I'm going to try to start watching them over the next few days. I have always wanted to learn Lisp, and the first lecture in the series is on Lisp. I also would like to see the difference that 20 years makes in the teaching of computer science. My guess is that the material in these lectures is as relevant today as it was when the lectures were first given. The languages used to teach the classes today may be different, but the theory is the same.

Wednesday, January 25, 2006

Introduction to Vehicular Ad Hoc Networks and Media Access Control

Intro

The recent adoption of the various 802.11 wireless standards has caused a dramatic increase in the number of wireless data networks. Today, wireless LANs are widely deployed and the cost of wireless equipment continues to drop. Currently, an 802.11 adapter or access point (AP) can be purchased for next to nothing. As a result of the wide acceptance of the 802.11 standards, academia and the commercial sector are looking for other applications of these wireless technologies. Mobile ad hoc networks (MANETs) are one area that has recently received considerable attention. One promising application of mobile ad hoc networks is the development of vehicular ad hoc networks (VANETs).

A MANET is a self-forming network that can function without any centralized control. Each node in an ad hoc network acts as both a data terminal and a router. The nodes in the network use the wireless medium to communicate with other nodes in their radio range. A VANET is effectively a subset of MANETs. The benefit of ad hoc networks is that they can be deployed in areas where it isn't feasible to install the needed infrastructure; it would be expensive and unrealistic to install 802.11 access points to cover all of the roads in the United States. Another benefit of ad hoc networks is that they can be quickly deployed with no administrator involvement, and the administration of a large-scale vehicular network would be a difficult task. These reasons make ad hoc networking a natural fit for vehicular environments.

Traffic fatalities are one of the leading causes of death in the United States. The Federal Communications Commission (FCC), realizing the problem of traffic fatalities in the US, dedicated 75 MHz of the frequency spectrum in the range 5.850 to 5.925 GHz to be used for vehicle-to-vehicle and vehicle-to-roadside communication. The 5.9 GHz spectrum was termed Dedicated Short Range Communication (DSRC) and is based on a variant of 802.11a. Seven channels of 10 MHz each make up DSRC, with six of the channels being used for services and one channel for control. The goal of the project is to enable the driver of a vehicle to receive information about their surrounding environment. The control channel is used to broadcast safety messages, e.g. to alert the driver of potentially hazardous road conditions, and to announce the services that are available. If a vehicle finds a service of interest on the control channel, it then switches to one of the service channels to use the service. A number of additional value-added features are to be provided by the service channels, such as the announcement of places of interest in the driver's location, e.g. restaurants in the area or gas prices.

The creation of vehicular ad hoc networks (VANETs) has also spawned much interest in the rest of the world; in Germany there is the FleetNet project and in Japan the ITS project. Vehicular ad hoc networks are also known under a number of different terms, such as inter-vehicle communication (IVC), Dedicated Short Range Communication (DSRC), and WAVE. The goal of most of these projects is to create new network algorithms or modify existing protocols to be used in a vehicular environment.

Challenges Creating Ad Hoc Networks

There are many challenges that need to be addressed when creating a vehicular ad hoc network. One of the challenges facing ad hoc networks is that the topology of the network changes rapidly. Vehicles in a VANET have a high degree of mobility; the average length of time that two vehicles are in direct communication range with each other is approximately one minute. Another obstacle restricting the widespread adoption of ad hoc networks is that many of the protocols used for 802.11 are centralized, and new distributed algorithms must be developed. Many of the algorithms that were acceptable for 802.11 relied on the fact that there was a centralized controller, the AP. The 802.11 standard does provide limited support for ad hoc mode with the independent basic service set (IBSS) configuration, but it is not sufficient for vehicular ad hoc networks. Furthermore, wireless communication is unreliable: the error rate in wireless networks is much higher than on an Ethernet. All of these issues make implementing a VANET difficult.

Media Access Control

To create wide-scale ad hoc networks, changes need to be made to the media access control (MAC) layer. The objective of the media access control layer is to arbitrate access to the shared medium, which in this case is the wireless channel. If no method is used to coordinate the transmission of data, then a large number of collisions would occur and the data sent would be lost. The ideal MAC ensures that no two nodes within transmission range of each other transmit at the same time.

The 802.11 family of protocols uses CSMA/CA with acknowledgments to limit the number of collisions in a network. The 802.11 standard defines two MAC protocols. The Distributed Coordination Function (DCF) is a contention-based access protocol: all nodes that have data to send contend for the channel. Contention-based protocols are the easiest to implement, but the problem with them is that they offer no quality of service (QoS) guarantees. Contention-free protocols schedule when a node can transmit, which enables the use of real-time services. The Point Coordination Function (PCF) is a contention-free protocol, but it is not applicable to ad hoc networks because it relies on a central node to support the real-time delivery of packets.

One of the main problems affecting the reliability of the DCF is the hidden terminal problem, which is the main cause of collisions in a wireless network. The hidden terminal problem occurs when two nodes that are outside the transmission range of each other each transmit to a node that is shared between them. In Figure 1 below, nodes S1 and S2 cannot sense each other's transmissions. If both S1 and S2 were to transmit to R1 at the same time, a collision would occur.


Figure 1. Hidden Terminal Problem

The 802.11 protocol addresses this problem by adding an optional RTS/CTS exchange before the actual data is transferred. Figure 2 below shows the RTS/CTS exchange before the transmission of data. When S2 hears the CTS from R1, it will defer its transmission.


Figure 2. RTS/CTS Exchange

The wireless bandwidth is a scarce resource, so the MAC should make efficient use of it. MAC protocols can be contention based or contention free. The RTS/CTS exchange helps to eliminate the problem of hidden terminals and in turn makes better use of the bandwidth.
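
A minimal sketch of the virtual carrier sensing that makes this work: a node that overhears an RTS or CTS records the duration field in its Network Allocation Vector (NAV) and treats the medium as busy until the NAV expires, which is how S2 defers after hearing R1's CTS. The interface below is an illustrative simplification of the 802.11 rules, not a full implementation.

```python
class VirtualCarrierSense:
    def __init__(self):
        self.nav_expires_at = 0.0   # seconds on a simulation clock

    def on_overheard_frame(self, frame_type, duration, now):
        # RTS and CTS frames carry the duration of the upcoming exchange.
        if frame_type in ("RTS", "CTS"):
            self.nav_expires_at = max(self.nav_expires_at, now + duration)

    def medium_idle(self, now, physical_idle):
        # Transmit only when both physical and virtual carrier sense are idle.
        return physical_idle and now >= self.nav_expires_at
```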

Broadcast Messages

A number of challenges exist in providing reliable broadcasts. In vehicular ad hoc networks a majority of the messages that are transmitted will be periodic broadcast messages that announce the state of a vehicle to its neighbors; it is likely that there will be more broadcast messages than unicast messages in a vehicular network. In this case, the RTS/CTS exchange cannot be used because it would flood the network with traffic. Also, it isn't practical to receive acknowledgments from all of the nodes that receive a broadcast message. The problem with not using the RTS/CTS exchange is that the network exhibits the hidden terminal problem discussed above.

A number of different approaches can be taken to broadcast a message to each node in an ad hoc network.

  • Flooding

  • Probabilistic Broadcast

  • Counter-Based Broadcast

  • Location-Based Broadcast

  • Cluster-Based Broadcast

Flooding is the easiest method to implement, but it also suffers from the most problems. In the flooding algorithm, each node in the network that receives a broadcast message for the first time rebroadcasts the message. The use of flooding results in the “broadcast storm problem”, which can be characterized by redundant rebroadcasts, contention, and collisions. First, when each node rebroadcasts a message it is highly likely that the neighboring nodes have already received the broadcast, so flooding creates a large number of redundant messages. Second, since all nodes in the area are trying to rebroadcast the message, there will be a significant number of nodes contending for access to the wireless channel. Third, a high number of collisions will occur without the use of the RTS/CTS exchange.
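
A minimal sketch of blind flooding, the baseline that the schemes listed above improve on: a node rebroadcasts a message only the first time it is seen, yet in a dense network almost every reception is already a duplicate, which is exactly the redundancy behind the broadcast storm problem.

```python
class FloodingNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.seen = set()

    def on_receive(self, msg_id):
        if msg_id in self.seen:
            return False    # duplicate: contributes only contention and collisions
        self.seen.add(msg_id)
        return True         # first copy: rebroadcast to all neighbors
```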

Conclusion

As a result of the “broadcast storm problem”, more efficient methods of broadcasting need to be used. In the paper “The Broadcast Storm Problem in a Mobile Ad Hoc Network” the authors describe several possible solutions to provide more efficient broadcasts in an ad hoc network. A number of other papers have built on the ideas discussed in this paper. Providing reliable broadcasts in an ad hoc network is still an open research issue.

References

H. Alshear and E. Horlait. “An Optimized Adaptive Broadcast Scheme for Inter-Vehicle Communication,” in Proc. IEEE Vehicular Technology Conference (IEEE VTC2005-Spring), Stockholm, Sweden, May 2005.

J. Blum, A. Eskandarian, and L. Hoffman. “Challenges of Intervehicle Ad Hoc Networks,” IEEE Transactions on Intelligent Transportation Systems, Vol. 5, No. 4, December 2004.

S.-Y. Ni, Y.-C. Tseng, Y.-S. Chen, and J.-P. Sheu. “The Broadcast Storm Problem in a Mobile Ad Hoc Network,” in Proc. ACM/IEEE MobiCom, 1999.

M. Torrent-Moreno, M. Killat, and H. Hartenstein. “The Challenges of Robust Inter-Vehicle Communications,” in Proc. 62nd IEEE Semiannual Vehicular Technology Conference, Dallas, Texas, September 2005.

Q. Xu, T. Mak, and R. Sengupta. “Vehicle-to-Vehicle Safety Messaging in DSRC,” in Proc. ACM VANET, Philadelphia, October 2004.

A Guide to Starting Network Related Research

When I started doing my research project, I was having a problem with narrowing the scope of my project. My research advisor sent me a link, research-start.html, which explains how to start a research project. This page should be helpful to anyone else who is looking to start a research project in a network-related field.

Friday, January 20, 2006

Speeches Given by RMS

Richard Stallman is the founder of the Free Software Foundation and has paved the way for open source software. If it wasn't for Stallman there is a high probability that Linux would not exist in its present form. The majority of the programs that are built around the Linux kernel are a result of the work done by Stallman's GNU project.

One thing that can be said about Stallman is he is always controversial and thought provoking. While it is doubtful that anyone will agree with all of Stallman's theories, the speeches he gives are very engaging. I stumbled across a page on the GNU website that contains a number of speeches given by Richard Stallman. The speeches are in either audio or video format.

Thursday, January 19, 2006

Java Application Development on Linux




I completed reading the book "Java Application Development on Linux" by Carl Albing and Michael Schwarz. The book is well worth reading. There are a number of quality Java books available that explain the Java language, such as "The Java Programming Language" and "Core Java", to name a few. What differentiates this book is that it focuses on the open source tools that assist in the implementation of a Java program. All of the information in the book can be acquired by reading tutorials on the Web, but the nice part is that this book collects all of that information in one place. Another nice thing about the book is that it can be freely downloaded from the publisher's web site.

Some of the topics the authors discuss in the book are: the different software development kits available on Linux, open source IDEs, CVS, JUnit, Ant, SWT, Swing, JDBC and EJB. I have used Java extensively for projects at school, so I was familiar with many of these topics, but I have never actually used some of the tools discussed in the book. The two topics I found most helpful were the overviews of CVS and JUnit. Before I read the book, I had a basic idea of how these tools worked, but now I am much more likely to use them. CVS makes it possible to revert back to a prior version of source code without much effort. For the next program that I write, I will definitely use CVS.

The book is a good starting point for getting acquainted with open source tools to use in Java development. The only drawback is that, because of space constraints, the authors are not able to go into much detail on some of the subjects. The book does a good job of pointing the reader in the correct direction for additional information.

Wednesday, January 18, 2006

DRM and the GPL

I recently listened to the January 6, 2006 episode of the Gilmore Gang. On this episode, the gang has a thought-provoking discussion on digital rights management (DRM). One prediction the gang makes for the upcoming year is that “it will not be the year of the Linux desktop, but the year of DRM”.

The problem that I see with using DRM technology is that various companies are taking different approaches to implementing digital rights management. Media that enforces a DRM mechanism, such as that obtained from the iTunes music store, will not play in Windows Media Player, XMMS, or Xine. The consumer is also restricted from playing videos in Microsoft's wmv format on Linux, if they include DRM. To some people this is not a problem, but it excludes a number of users who don't own an OS that supports programs such as iTunes or Windows Media Player. Neither Apple nor Microsoft shows any sign of supporting DRM on competing platforms.

The industry needs to focus on creating a standard protocol. In the field of encryption, any good encryption algorithm is made public and tested by peer review. The key players that are offering protected media should take the same approach. Each company that offers media that enforces DRM is making its product so it is not interoperable with its competitors', in the hope of gaining market share and controlling the distribution of media. It is probably not in the best interests of the parties involved to create a standard DRM protocol. The lack of standardization of DRM in the end hurts the consumer.

Another thing that really startled me is when I learned that Intel has implemented DRM technology in its new line of chips. I'm not really sure if this is a good thing or a bad thing, but I would guess that it is most likely a bad thing for consumers. The only benefit of adding DRM at the hardware level is that it may allow media that is protected by DRM to become standardized; in theory, DRM media would be able to be played on Linux. In this case, users would effectively have the option to use material protected by DRM. Currently any media that is protected by DRM cannot be played on a box running Linux.

I don't really see how adding DRM functionality to Intel's chips benefits their future growth. Intel must have decided to add this feature at the chip level as a result of pressure put on them by the MPAA and RIAA. As a consumer, I would buy an alternative product that does not include this feature. I'm not really sure why someone would buy a chip that restricts their use when there are other alternatives. On the other hand, a lot of users may not care or may not be informed that the chips have DRM.

As for AMD, I would like to see what their response is to Intel adding DRM at the hardware level. I'm a long-time user of AMD, since the K6 chip line, and I really hope that they don't follow the path taken by Intel. If AMD doesn't include the DRM technology, I think a lot of Intel users would switch over to AMD. If AMD decides to include a DRM implementation, some users would decide to abandon the x86 platform altogether and switch to another platform. One possible solution would be to use Sun's SPARC architecture; Sun recently made the SPARC architecture open source. Another possible solution is IBM's PowerPC architecture. IBM is a big supporter of Linux, so in the future I would not be surprised to see IBM heavily marketing Linux on PowerPCs. Linux is currently available for both of these architectures, so these seem like natural solutions.

The FSF has taken a strong stance against digital rights management in the GPLv3 draft. Any software published under the new version of the GPL license must not include any DRM restrictions. The adoption of GPLv3 will basically rule out using any media protected with DRM on Linux, since almost all software on Linux is published under the GPL.

There are two different business models emerging on the Internet. First, there are those who want to get paid each time something is viewed or listened to. For instance, I read an article awhile back where Madonna wanted to get paid any time that someone searched for her in a search engine. Second, there is the other side, where companies such as Google generate revenue from advertisement and provide content for free. The BBC seems to be adopting this model with the announcement of the Open News Archive. Another example which goes against the traditional outlets is works that are published under the Creative Commons license.

While many are trying to make their content open and accessible to the public, there are other corporations, such as Sony, that are not. It was discovered back in November of 2005 that Sony music CDs install a rootkit on the owner's computer when the disc is inserted to listen to it. I am really surprised by the public's reaction and the court settlement concerning the Sony rootkit. If I happened to install a rootkit on one of Sony's computers I would go to prison; in the case of Sony, they end up giving the customer some free songs as a result of invading their customers' computers. If you are going to hold an individual accountable for their actions, a corporation such as Sony should also be held responsible. The RIAA even backs the use of the rootkit by Sony. It seems like Sony got off easy. In the end, Sony's use of the rootkit will cause me to never purchase another of their products.

The next year or two will definitely be very interesting in the development of digital rights. There are two opposing sides, those in favor of and those strongly against DRM. On the one side there are companies and organizations such as Microsoft, Apple, Sony, the RIAA, the MPAA, and Intel that are pushing to include DRM. On the other side there are companies and organizations that support open source, such as the FSF, the Linux community, Sun, IBM, and Google. In the long run, I think open content will win out over content that is protected by DRM. While major corporations take any means necessary to protect their content, consumers will switch to media that is produced under a license similar to Creative Commons.

Tuesday, January 17, 2006

Cell Phone Records

On last week's episode of Off the Hook (January 12, 2006), Emmanuel and the other hosts of the show mentioned that it is now possible to purchase cell phone records. I decided to look around the web and see if I could find any sites that were offering this service. I typed "cell phone records" into a search engine and it turned up a number of such sites. Locatecell.com is one such site; they offer the last one hundred outgoing calls a person made for $110. What you receive is a list of the numbers that were called by the person. Incoming calls may also be available. The company will also provide information such as the length of a call for an additional fee. All that is needed to receive another person's cell phone records is their phone number.

The selling of cell phone records is definitely an invasion of privacy. I didn't see the location of the cell towers used during a call offered for sale, but I'm sure companies will start selling that data too. With the frequent use of cell phones, this information makes it possible to monitor someone's daily activities. I'm not really sure how this information is gathered. The cell phone industry may be selling this personal information to third parties such as locatecell.com. If they aren't directly responsible for the leaking of the records, they must have some knowledge of this practice and allow it to continue.

Thursday, January 12, 2006

Vehicular Ad Hoc Network Project

To finish my master's degree in computer science, I need to either complete a research project or a thesis. I stumbled around with ideas for a project for a few months. My first thought was to create an open source application that would be usable by others, instead of writing a paper which will probably never be read. On the other hand, I like the research side of computer science and I was thinking about going for a PhD, so the research experience would be beneficial. In the end I decided on doing a research project.

The topic I decided I would like to focus on is computer networks. I have always done well in the networking classes I have taken; in both of the network classes I took at U of M Dearborn, the grade I received was an A+. I have always found networks to be a fascinating subject. For instance, I like to know how different protocols work and what the different fields in the headers control.

In September 2005, I emailed a professor that I had for one of my network classes, asking him for some research topic ideas. The professor recommended the topic of vehicular ad hoc networks (VANETs). He said that it isn't the most refined area of study and other topics may be easier, but there are a lot of challenging issues still to be addressed. At first, I had some problems locating information on the topic. I discovered an ACM conference that published quite a few papers on the topic of VANETs, and Google Scholar is another place where I was able to locate academic papers on the topic. Over the fall I read a number of papers on the subject and purchased a few books to familiarize myself with some of the problems associated with ad hoc networks. The first book that I read on ad hoc networks was Mobile Ad Hoc Networks: From Wireless LANs to 4G Networks by George Aggelou; the book is alright, but it focuses heavily on cellular technology. The book I would recommend is Ad Hoc Wireless Networks: Architectures and Protocols by C. Siva Ram Murthy and B.S. Manoj. The book is kind of dry to read, but it goes into the protocols in greater depth.

During the fall I was spending most of my time working and didn't get to spend as much time as I would have liked to on the project. I did spend a considerable amount of time reading, but I didn't really do any work that could be measured on paper.

I found VANETs to be a very interesting topic. At first, it may not sound that useful to have a network of cars, but there are numerous uses for this technology. One of the greatest benefits of vehicular networks is that they can provide the driver with additional safety. An example of this extra safety is that a warning message could be sent to the driver if the car in front of them is braking abruptly. Also, information could be relayed concerning hazardous conditions such as an accident or ice on the road. There are also a number of non-safety applications which would enhance the driver's experience, such as multimedia applications.

Many of the problems that arise in VANETs are the same as those in mobile ad hoc networks (MANETs), but some solutions that are applicable in MANETs are not applicable in VANETs. One area that is different between the two types of networks is that in VANETs the vehicles have a somewhat predictable movement pattern; for instance, it is unlikely a vehicle will travel where a road does not exist. Another difference is that the problem of battery life doesn't come into play in VANETs, since a car battery is a constant source of energy. In a vehicular network the nodes are highly mobile, and two cars may not be in transmission range of each other for long periods of time; the average connectivity between two nodes in a VANET is around a minute. These are just a few of the issues that are involved in vehicular ad hoc networks.

As I continue to research this topic, I will post new material.

Monday, January 09, 2006

Young Adults Leaving Michigan

I came across an article from the Detroit News, "Exodus from Michigan". I agree with much of the article. From what I see, Michigan is losing mostly young college graduates. Most of my friends have left the state for better job opportunities, and only a small portion of my friends from high school are left here. Most of the people who I see stick around are people who never graduated from college.

Most of the jobs in the Detroit area are centered around the failing auto industry. The state has done a poor job of attracting new businesses to the area. The governor and mayor have been pushing the idea that having the city host the All-Star Game and the Super Bowl will help turn the area around. I don't really see how this is possible. Certain industries may benefit in the short term from these events, and some companies will see some quick cash come in, but it isn't a long-term solution to the state's problems. Having these events will benefit only the hotels, the casinos, and the restaurants in the area, and most of the workers in these industries are low paid anyway. Another solution proposed by the governor is to create "cool cities". The cool cities are basically cities with a lot of night life, i.e. bars. If the economy is growing, these types of cities will naturally develop.

Whenever a survey comes out that measures the quality of life throughout the country, Michigan does very poorly. In the newspaper over the past year, I have seen various stories which rate Michigan or Detroit poorly in different aspects. Detroit was rated the most out-of-shape city in the country. Detroit is one of the most segregated cities in the country. Michigan has the second highest unemployment rate in the country. Detroit was rated one of the worst cities in the country for dating. The list goes on.

I am seriously considering relocating to another state. I think my opportunities would be much better than if I stayed in the Detroit area.

TWAT

TWAT is a podcast which was started by Droops from Infonomicon. The name of the show is a spoof on TWIT. The show is published most days of the week, and each day a different person discusses a new topic for approximately 15 minutes. The show is very informative. Most of the people presenting material on the show are from other podcasts, but Droops encourages the listeners to submit material to be aired on the show. The nice part about TWAT is that the topics discussed are usually more technical than on other podcasts. Some of the recent topics were: Asterisk, remote logins, Windows passwords, and hacking Bluetooth. I would encourage people to check out the show.

Friday, January 06, 2006

Debian Based Distros



In the past, I had heard from various people that Debian-based Linux distributions were harder to use. For a long period of time, I never tried out Debian. Most of the time, I would install Red Hat, Fedora, or Mandrake because I was used to them. For the most part, these distributions worked fine for me. The problem I would repeatedly have with these distros was resolving dependencies when I was installing RPMs. I would have to search the web for the missing software, and usually when I found it, I would have the same problem all over again.

I installed Debian about a year ago. Since that time, I have yet to go back to using any RPM-based distros. I now have Debian 3.1 installed on one computer, Ubuntu 5.04 on my laptop, and Ubuntu 5.10 on my main computer. A few weeks ago I had an extra hard drive, so I decided I would try out SUSE (to see how it is after they were bought by Novell). SUSE has a very nice user interface, but using SUSE I encountered a lot of the same problems I would have with Fedora. In the end I decided to remove SUSE and install Debian. SUSE is a distro to watch in the future, since it is backed by Novell.

I feel Debian and Ubuntu are much easier to use. If I want to add a program to my computer, I use Synaptic and the program is up and running quickly. I don't enjoy spending hours trying to get a program to work. The only real problem I have with Ubuntu is that most of the development tools are left out of the default installation, but that isn't really that big of an issue.

Lately, I have seen many articles discussing the acceptance of open source software. I'm not really sure why so many in the open source community are so fascinated with mainstream users adopting Linux. I'm a user of open source software, and it doesn't really matter to me what acceptance rate Firefox has achieved. If people want to pay $100 every few years to upgrade their version of Windows or Mac, it is their choice. It seems to me the customers are mostly paying to get features that are already implemented in Linux.

Occasionally people who don't have any experience using Linux will come over to use my PC. All of them are able to quickly pick up how to use desktop Linux. None of these people know any UNIX commands, but with the desktop environment they're able to use it with the same proficiency as any other OS. By the way, I agree with Linus in the recent controversy he created in the KDE vs. Gnome debate. KDE is much more powerful than Gnome, but everyone has their personal preferences.

In the future if anyone were to come to me and ask my opinion of what OS to run I would have to tell them to stick to a Debian based distro.

Here is a picture of my current desktop.

Political Affiliation Test

I took an online test to determine what my political affiliation is. Before I took the test I was pretty sure that I would be considered a conservative, but I figured I might as well take it since I might learn something about myself. The results of the test were really no surprise to me. I would consider myself a Republican. Most of the news and info I get is from the conservative side. In the media I prefer to listen to people such as Rush Limbaugh, Sean Hannity, and Bill O'Reilly, since they share many of my viewpoints. I tend to agree more with the Republican economic policies than with their social policies. I definitely have a problem with the Republican leaders trying to justify spying on US citizens in the name of national security.

Here are the results from my test:

You are a

Social Conservative
(28% permissive)

and an...

Economic Conservative
(70% permissive)

You are best described as a:

Republican




Link: The Politics Test on Ok Cupid

On-Line Geek Test

On my friend Dave's blog, he had a few links to online tests which determine different aspects of a person's personality. I decided to take some of the tests that he had links to. The first test that I decided to take was one that would determine whether I was a nerd, geek or dork. I wasn't really sure what the difference was between them. I figured I had to be considered either a nerd or a geek, since most of the things that I like others would consider nerdy or geeky.

The one thing I noticed about the test is there were a lot of questions pertaining to either TV or movies. I tend not to watch much TV besides sports or the news stations. I guess the dorks are the people that devoutly follow movies or TV. I kind of see where the creators of the test are coming from with the Star Wars crowd. I don't really see why the American public looks up to actors and actresses so much. I'm kind of sick of Hollywood pushing their way of life and political views on people. A majority of the American public is easily influenced by these people, which is kind of a shame that they can't think on their own and they need to have their favorite Hollywood idol tell them how to think. But that is another topic.

The test determined that I was a cool nerd, which made me remember a hacker documentary I had seen, where the woman being interviewed said that she was a nerd, but at least she had a six-figure salary to go along with it. So, I guess it isn't all that bad being considered a nerd.

Here is the result of the test, for whatever it is worth.

Modern, Cool Nerd
69 % Nerd, 60% Geek, 21% Dork
For The Record:

A Nerd is someone who is passionate about learning/being smart/academia.

A Geek is someone who is passionate about some particular area or subject, often an obscure or difficult one.

A Dork is someone who has difficulty with common social expectations/interactions.

You scored better than half in Nerd and Geek, earning you the title of: Modern, Cool Nerd.

Nerds didn't use to be cool, but in the 90's that all changed. It used to be that, if you were a computer expert, you had to wear plaid or a pocket protector or suspenders or something that announced to the world that you couldn't quite fit in. Not anymore. Now, the intelligent and geeky have eked out for themselves a modicum of respect at the very least, and "geek is chic." The Modern, Cool Nerd is intelligent, knowledgable and always the person to call in a crisis (needing computer advice/an arcane bit of trivia knowledge). They are the one you want as your lifeline in Who Wants to Be a Millionaire (or the one up there, winning the million bucks)!

Congratulations!

Thanks Again! -- THE NERD? GEEK? OR DORK? TEST

My test tracked 3 variables. How you compared to other people your age and gender:
You scored higher than 76% on nerdiness
You scored higher than 83% on geekosity
You scored higher than 27% on dork points
Link: The Nerd? Geek? or Dork? Test written by donathos