What implementation detail makes this code fail so easily?
Question
This question is not about the well-known and documented fact that HashMap is not thread-safe, but about its specific failure modes on HotSpot and the JDK code. I am surprised by how readily this code fails with an NPE:
public static void main(String[] args) {
    Map<Integer, Integer> m = new HashMap<>(0, 0.75f);
    IntStream.range(0, 5).parallel().peek(i -> m.put(i, i)).map(m::get).count();
}
There is no mystery as to where the NPE comes from: it is thrown in the .map(m::get) step while trying to unbox a null. It fails in about 4 out of 5 runs.
On my machine Runtime#availableProcessors() reports 8, so presumably the range of length 5 is split into 5 subtasks, each with just a single member. I also assume my code runs in interpreted mode. It might be calling into JIT-compiled HashMap or Stream methods, but the top level is interpreted, therefore precluding any variation where HashMap state is loaded into thread-local memory (registers/stack), which would delay the observation of updates by another thread. If some of the five put operations don't literally execute at the same time on different cores, I don't expect them to destroy the HashMap's internal structure. The timing of the individual tasks must be extremely precise given the small amount of work.
Is it really the precise timing (the commonPool's threads must be unparked), or is there another route to cause this to fail on Oracle/OpenJDK HotSpot? My current version is
java version "1.8.0_72"
Java(TM) SE Runtime Environment (build 1.8.0_72-b15)
Java HotSpot(TM) 64-Bit Server VM (build 25.72-b15, mixed mode)
UPDATE: I find that even making just two insertions has a similarly high failure rate:
IntStream.range(0, 2).parallel().peek(i -> m.put(i, i)).map(m::get).count();
Accepted answer
First, it's not failing reliably. I managed to have some runs where no exception occurred. This, however, doesn't imply that the resulting map is correct. It's also possible that each thread witnesses its own value being successfully put, while the resulting map misses several mappings.
But indeed, failing with a NullPointerException happens quite often. I created the following debug code to illustrate the HashMap's inner workings:
import java.lang.reflect.AccessibleObject;
import java.lang.reflect.Field;
import java.util.HashMap;

static <K, V> void debugPut(HashMap<K, V> m, K k, V v) {
    if (m.isEmpty()) debug(m);
    m.put(k, v);
    debug(m);
}

private static <K, V> void debug(HashMap<K, V> m) {
    for (Field f : FIELDS) try {
        System.out.println(f.getName() + ": " + f.get(m));
    } catch (ReflectiveOperationException ex) {
        throw new AssertionError(ex);
    }
    System.out.println();
}

static final Field[] FIELDS;
static {
    String[] name = { "table", "size", "threshold" };
    Field[] f = new Field[name.length];
    for (int ix = 0; ix < name.length; ix++) try {
        f[ix] = HashMap.class.getDeclaredField(name[ix]);
    } catch (NoSuchFieldException ex) {
        throw new ExceptionInInitializerError(ex);
    }
    AccessibleObject.setAccessible(f, true);
    FIELDS = f;
}
Using this with the simple sequential for(int i=0; i<5; i++) debugPut(m, i, i); printed:
table: null
size: 0
threshold: 1
table: [Ljava.util.HashMap$Node;@70dea4e
size: 1
threshold: 1
table: [Ljava.util.HashMap$Node;@5c647e05
size: 2
threshold: 3
table: [Ljava.util.HashMap$Node;@5c647e05
size: 3
threshold: 3
table: [Ljava.util.HashMap$Node;@33909752
size: 4
threshold: 6
table: [Ljava.util.HashMap$Node;@33909752
size: 5
threshold: 6
As you can see, due to the initial capacity of 0, three different backing arrays are created even during the sequential operation. Each time the capacity is increased, there is a higher chance that a racy concurrent put misses the array update and creates its own array.
This is especially relevant for the initial state of an empty map with several threads trying to put their first key, as all threads might encounter the initial state of a null table and create their own. Also, even when reading the state of a completed first put, a new array is created for the second put as well.
But step-by-step debugging revealed even more chances of breaking:
Inside the method putVal, we see at the end:
++modCount;
if (++size > threshold)
    resize();
afterNodeInsertion(evict);
return null;
In other words, after the successful insertion of a new key, the table will get resized if the new size exceeds the threshold. So on the first put, resize() is called at the beginning because the table is null, and since your specified initial capacity is 0, i.e. too low to store one mapping, the new capacity will be 1 and the new threshold will be 1 * loadFactor == 1 * 0.75f == 0.75f, rounded down to 0. So right at the end of the first put, the new threshold is already exceeded and another resize() operation is triggered. So with an initial capacity of 0, the first put already creates and populates two arrays, which gives much higher chances of breaking if multiple threads perform this action concurrently, all encountering the initial state.
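The arithmetic of that first resize can be replayed in isolation (a sketch mirroring the values described above, not the actual JDK source):

```java
public class ZeroCapacityThreshold {
    public static void main(String[] args) {
        // new HashMap<>(0, 0.75f): the requested capacity of 0 is rounded
        // up to the minimum table size of 1 on the first resize()
        int newCap = 1;
        float loadFactor = 0.75f;
        float ft = newCap * loadFactor; // 0.75f
        int newThr = (int) ft;          // the cast truncates to 0
        System.out.println("capacity=" + newCap + ", threshold=" + newThr);
        // After the first put, size 1 > threshold 0, so resize() runs again.
    }
}
```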
And there is another point. Looking into the resize() operation, we see these lines:
@SuppressWarnings({"rawtypes","unchecked"})
Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
table = newTab;
if (oldTab != null) {
… (transfer old contents to new array)
In other words, the new array reference is stored into the heap before it has been populated with the old entries, so even without any reordering of reads and writes, there is a chance that another thread reads that reference without seeing the old entries, including the one it has written itself previously. Actually, optimizations reducing heap access may lower the chance of a thread not seeing its own update in an immediately following query.
Still, it must also be noted that the assumption that everything runs interpreted here is not founded. Since HashMap is used internally by the JRE as well, even before your application starts, there is also a chance of encountering already-compiled code when using HashMap.