Programming a concurrent application is not an easy job. Incorrect use of the synchronization mechanisms can create different problems with the tasks in your application. In this section, we describe some of these problems.
You can have a data race (also called a race condition) in your application when two or more tasks write a shared variable outside a critical section, that is to say, without using any synchronization mechanism.
Under these circumstances, the final result of your application may depend on the order of execution of the tasks. Look at the following example:
package com.packt.java.concurrency;

public class Account {

    private float balance;

    public void modify(float difference) {
        float value = this.balance;
        this.balance = value + difference;
    }
}
Imagine that two different tasks execute the modify() method on the same Account object. Depending on the order in which their statements run, the final result can vary. Suppose that the initial balance is 1000 and both tasks call the modify() method with 1000 as a parameter. The final result should be 3000, but if both tasks execute the first statement at the same time and then the second statement at the same time, each of them reads a balance of 1000 and writes 2000, so one update is lost and the final result will be 2000. As you can see, the modify() method is not atomic and the Account class is not thread-safe.
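To see the lost updates in practice, the following minimal sketch (not taken from the book) runs two threads that each call modify() many times. The UnsafeAccount class is a hypothetical copy of Account with a getter added so that we can print the result.

package com.packt.java.concurrency;

// A self-contained sketch of the lost-update problem. UnsafeAccount mirrors the
// Account class above, plus a getter so that the final balance can be printed.
public class AccountRaceDemo {

    static class UnsafeAccount {
        private float balance;

        public void modify(float difference) {
            float value = this.balance;        // read
            this.balance = value + difference; // write: not atomic with the read
        }

        public float getBalance() {
            return balance;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        UnsafeAccount account = new UnsafeAccount();

        // Each task adds 1 to the balance 100,000 times.
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                account.modify(1);
            }
        };

        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        // Expected 200000.0, but updates are usually lost, so the value is lower.
        System.out.println("Final balance: " + account.getBalance());
    }
}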
There is a deadlock in your concurrent application when two or more tasks are each waiting for a shared resource that must be freed by another task, which in turn is waiting for a resource held by one of the first ones. It happens when four conditions hold simultaneously in the system. They are the Coffman conditions: mutual exclusion (only one task can use the resource at a time), hold and wait (a task holds a resource while it waits for another one), no preemption (a resource can't be taken away from the task that holds it), and circular wait (there is a circular chain of tasks, each one waiting for a resource held by the next).
Some mechanisms exist that you can use to avoid deadlocks, for example, always acquiring the locks in the same order in every task, using timeouts when you try to get a lock, or detecting the deadlock and recovering from it.
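As an illustration, here is a minimal sketch (not from the book) of two tasks that take the same two locks in opposite order. Depending on the timing, each thread may end up waiting forever for the lock the other one holds; acquiring both locks always in the same order removes the risk.

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// Two tasks acquire the same two locks in opposite order: a potential deadlock.
public class DeadlockDemo {

    private static final Lock resource1 = new ReentrantLock();
    private static final Lock resource2 = new ReentrantLock();

    public static void main(String[] args) {
        Thread task1 = new Thread(() -> {
            resource1.lock();
            try {
                pause();              // give the other thread time to take resource2
                resource2.lock();     // waits forever if task2 already holds it
                try {
                    System.out.println("Task 1 got both resources");
                } finally {
                    resource2.unlock();
                }
            } finally {
                resource1.unlock();
            }
        });

        Thread task2 = new Thread(() -> {
            resource2.lock();
            try {
                pause();
                resource1.lock();     // waits forever if task1 already holds it
                try {
                    System.out.println("Task 2 got both resources");
                } finally {
                    resource1.unlock();
                }
            } finally {
                resource2.unlock();
            }
        });

        task1.start();
        task2.start();
    }

    private static void pause() {
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}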
A livelock occurs when you have two tasks in your system that are always changing their states due to the actions of the other. Consequently, they are in a loop of state changes and unable to continue.
For example, you have two tasks, Task 1 and Task 2, and both need two resources, Resource 1 and Resource 2. Suppose that Task 1 has a lock on Resource 1 and Task 2 has a lock on Resource 2. Task 1 then tries to lock Resource 2 while Task 2 tries to lock Resource 1; as neither of them can gain access to the resource it needs, both free the resource they hold and begin the cycle again. This situation can continue indefinitely, so the tasks will never end their execution.
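The following minimal sketch (hypothetical, not from the book) reproduces this back-off-and-retry cycle with two ReentrantLock objects. Whether the two threads keep circling forever or eventually break the symmetry depends on the scheduler's timing.

import java.util.concurrent.locks.ReentrantLock;

// Each task holds one lock, fails to get the other, politely releases its own
// lock, and retries, so both keep changing state without making progress.
public class LivelockDemo {

    private static final ReentrantLock resource1 = new ReentrantLock();
    private static final ReentrantLock resource2 = new ReentrantLock();

    private static void work(ReentrantLock mine, ReentrantLock theirs, String name) {
        while (true) {
            mine.lock();
            try {
                if (theirs.tryLock()) {   // try to take the second resource
                    try {
                        System.out.println(name + " got both resources");
                        return;           // only reached if the timing breaks the cycle
                    } finally {
                        theirs.unlock();
                    }
                }
            } finally {
                mine.unlock();            // back off and start the cycle again
            }
            try {
                Thread.sleep(10);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }

    public static void main(String[] args) {
        new Thread(() -> work(resource1, resource2, "Task 1")).start();
        new Thread(() -> work(resource2, resource1, "Task 2")).start();
    }
}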
Resource starvation occurs when a task in your system never gets a resource that it needs to continue with its execution. When there is more than one task waiting for a resource and the resource is released, the system has to choose the next task that can use it. If your system doesn't have a good selection algorithm, some threads can end up waiting for the resource for a very long time.
Fairness is the solution to this problem: every task that is waiting for a resource must get access to it within a given period of time. One option is to implement an algorithm that takes into account how long a task has been waiting for a resource when it chooses the next task that will hold it. However, a fair implementation of locks requires additional overhead, which may lower your program's throughput.
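As a concrete example of this trade-off, java.util.concurrent.locks.ReentrantLock accepts a fairness flag in its constructor. The sketch below (not from the book) creates a fair lock so that waiting threads acquire it roughly in the order they requested it, at the cost of some throughput.

import java.util.concurrent.locks.ReentrantLock;

// A fair ReentrantLock: the longest-waiting thread is the next one to get the lock.
public class FairLockDemo {

    private static final ReentrantLock lock = new ReentrantLock(true);

    public static void main(String[] args) {
        Runnable task = () -> {
            for (int i = 0; i < 3; i++) {
                lock.lock();
                try {
                    System.out.println(Thread.currentThread().getName() + " holds the lock");
                } finally {
                    lock.unlock();
                }
            }
        };

        for (int i = 0; i < 4; i++) {
            new Thread(task, "Task-" + i).start();
        }
    }
}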
Priority inversion occurs when a low-priority task holds a resource that is needed by a high-priority task, so the low-priority task effectively finishes its execution before the high-priority task.
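The following sketch (hypothetical, not from the book) only sets up the scenario: a low-priority thread takes a lock before a high-priority thread asks for it. Whether a real inversion occurs depends on the operating system scheduler, because Java thread priorities are only hints.

import java.util.concurrent.locks.ReentrantLock;

// A low-priority thread holds a lock that a high-priority thread needs, so the
// high-priority thread can't finish until the low-priority one releases the lock.
public class PriorityInversionDemo {

    private static final ReentrantLock resource = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        Thread lowPriority = new Thread(() -> {
            resource.lock();
            try {
                double sum = 0;       // long computation while holding the resource
                for (int i = 0; i < 1_000_000; i++) {
                    sum += Math.sqrt(i);
                }
                System.out.println("Low-priority task done: " + sum);
            } finally {
                resource.unlock();
            }
        });

        Thread highPriority = new Thread(() -> {
            resource.lock();          // blocked until the low-priority task releases it
            try {
                System.out.println("High-priority task done");
            } finally {
                resource.unlock();
            }
        });

        lowPriority.setPriority(Thread.MIN_PRIORITY);
        highPriority.setPriority(Thread.MAX_PRIORITY);

        lowPriority.start();
        Thread.sleep(100);            // ensure the low-priority thread takes the lock first
        highPriority.start();
    }
}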