8354213: Restore pointless unicode characters to ASCII
Reviewed-by: naoto, erikj, iris
parent 776e1cf1df
commit 4a242e3a65
@@ -1,3 +1,3 @@
 # Contributing to the JDK
 
-Please see the [OpenJDK Developers’ Guide](https://openjdk.org/guide/).
+Please see the [OpenJDK Developers' Guide](https://openjdk.org/guide/).
@@ -106,7 +106,7 @@ Prefer having checks inside test code.
 
 Not only does having test logic outside, e.g. verification method,
 depending on asserts in product code contradict with several items
-above but also decreases test’s readability and stability. It is much
+above but also decreases test's readability and stability. It is much
 easier to understand that a test is testing when all testing logic is
 located inside a test or nearby in shared test libraries. As a rule of
 thumb, the closer a check to a test, the better.
@@ -119,7 +119,7 @@ Prefer `EXPECT` over `ASSERT` if possible.
 
 This is related to the [informativeness](#informativeness) property of
 tests, information for other checks can help to better localize a
-defect’s root-cause. One should use `ASSERT` if it is impossible to
+defect's root-cause. One should use `ASSERT` if it is impossible to
 continue test execution or if it does not make much sense. Later in
 the text, `EXPECT` forms will be used to refer to both
 `ASSERT/EXPECT`.
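To make the EXPECT-versus-ASSERT distinction in this hunk concrete, here is a minimal GoogleTest sketch (the test name and the vector are illustrative, not taken from the patch): a failed ASSERT aborts the current test body, while failed EXPECT checks are recorded and the remaining checks still run, adding information for failure analysis.

    #include "gtest/gtest.h"
    #include <vector>

    TEST(Informativeness, expect_vs_assert) {
      std::vector<int> v{1, 2, 3};
      ASSERT_FALSE(v.empty());  // fatal: continuing makes no sense on an empty vector
      EXPECT_EQ(v.front(), 1);  // non-fatal: a failure is reported, execution continues
      EXPECT_EQ(v.back(), 3);   // still evaluated even if the previous EXPECT failed
    }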
@@ -160,7 +160,7 @@ value of the difference between `v1` and `v2` is not greater than `eps`.
 
 Use string special macros for C strings comparisons.
 
-`EXPECT_EQ` just compares pointers’ values, which is hardly what one
+`EXPECT_EQ` just compares pointers' values, which is hardly what one
 wants comparing C strings. GoogleTest provides `EXPECT_STREQ` and
 `EXPECT_STRNE` macros to compare C string contents. There are also
 case-insensitive versions `EXPECT_STRCASEEQ`, `EXPECT_STRCASENE`.
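A short sketch of the C-string point above (buffer and literal are illustrative): `EXPECT_EQ` on two `char*` arguments compares addresses, whereas `EXPECT_STREQ` compares the characters.

    #include "gtest/gtest.h"
    #include <cstring>

    TEST(CStrings, compare_contents_not_pointers) {
      char buf[4];
      std::strcpy(buf, "foo");
      const char* literal = "foo";
      // EXPECT_EQ(buf, literal) would compare the two pointer values and is likely to fail.
      EXPECT_STREQ(buf, literal);        // compares the contents
      EXPECT_STRCASEEQ("FOO", literal);  // case-insensitive variant
    }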
@@ -226,7 +226,7 @@ subsystem, etc.
 
 This naming scheme helps to find tests, filter them and simplifies
 test failure analysis. For example, class `Foo` - test group `Foo`,
-compiler logging subsystem - test group `CompilerLogging`, G1 GC — test
+compiler logging subsystem - test group `CompilerLogging`, G1 GC - test
 group `G1GC`, and so forth.
 
 ### Filename
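For illustration, with plain GoogleTest macros the grouping described in that hunk could look like this (group and test names are invented for the example):

    #include "gtest/gtest.h"

    // The first macro argument is the test group, the second describes the scenario.
    TEST(G1GC, region_attr_defaults)        { /* checks for the G1 GC subsystem */ }
    TEST(CompilerLogging, tag_combinations) { /* checks for the compiler logging subsystem */ }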
@@ -287,7 +287,7 @@ Fixture classes should be named after tested classes, subsystems, etc
 
 All test purpose friends should have either `Test` or `Testable` suffix.
 
-It greatly simplifies understanding of friendship’s purpose and allows
+It greatly simplifies understanding of friendship's purpose and allows
 statically check that private members are not exposed unexpectedly.
 Having `FooTest` as a friend of `Foo` without any comments will be
 understood as a necessary evil to get testability.
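A minimal sketch of that convention (class and member names are illustrative only): the `Test` suffix makes the test-only friendship self-documenting.

    class Foo {
      friend class FooTest;      // granted purely for test access; the suffix makes the intent obvious
      int _internal_state = 0;
     public:
      int state() const { return _internal_state; }
    };

    class FooTest {
     public:
      static int peek(const Foo& f) { return f._internal_state; }  // allowed via the friendship
    };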
@@ -397,7 +397,7 @@ and filter out inapplicable tests.
 Restore changed flags.
 
 It is quite common for tests to configure JVM in a certain way
-changing flags’ values. GoogleTest provides two ways to set up
+changing flags' values. GoogleTest provides two ways to set up
 environment before a test and restore it afterward: using either
 constructor and destructor or `SetUp` and `TearDown` functions. Both ways
 require to use a test fixture class, which sometimes is too wordy. The
@@ -406,7 +406,7 @@ be used in such cases to restore/set values.
 
 Caveats:
 
-* Changing a flag’s value could break the invariants between flags' values and hence could lead to unexpected/unsupported JVM state.
+* Changing a flag's value could break the invariants between flags' values and hence could lead to unexpected/unsupported JVM state.
 
 * `FLAG_SET_*` macros can change more than one flag (in order to
 maintain invariants) so it is hard to predict what flags will be
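As a sketch of the fixture-based save/restore pattern the two hunks above describe (the flag here is a plain stand-in variable, not a real JVM flag; a HotSpot test would go through the `FLAG_SET_*` machinery with the caveats just listed):

    #include "gtest/gtest.h"

    static int MyFlag = 0;  // stand-in for a JVM flag

    class MyFlagTest : public ::testing::Test {
     protected:
      int _saved;
      void SetUp() override    { _saved = MyFlag; MyFlag = 42; }  // configure for the test
      void TearDown() override { MyFlag = _saved; }               // restore afterwards
    };

    TEST_F(MyFlagTest, sees_configured_value) {
      EXPECT_EQ(42, MyFlag);
    }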
@@ -87,7 +87,7 @@ void RangeCheckStub::emit_code(LIR_Assembler* ce) {
 __ mv(t1, _array->as_pointer_register());
 stub_id = C1StubId::throw_range_check_failed_id;
 }
 // t0 and t1 are used as args in generate_exception_throw,
 // so use x1/ra as the tmp register for rt_call.
 __ rt_call(Runtime1::entry_for(stub_id), ra);
 ce->add_call_info_here(_info);
@@ -275,7 +275,7 @@ void BarrierSetAssembler::nmethod_entry_barrier(MacroAssembler* masm, Label* slo
 // order, while allowing other independent instructions to be reordered.
 // Note: This may be slower than using a membar(load|load) (fence r,r).
 // Because processors will not start the second load until the first comes back.
-// This means you can’t overlap the two loads,
+// This means you can't overlap the two loads,
 // which is stronger than needed for ordering (stronger than TSO).
 __ srli(ra, t0, 32);
 __ orr(t1, t1, ra);
@@ -670,9 +670,9 @@ class MacroAssembler: public Assembler {
 // JALR, return address stack updates:
 // | rd is x1/x5 | rs1 is x1/x5 | rd=rs1 | RAS action
 // | ----------- | ------------ | ------ |-------------
-// | No | No | — | None
-// | No | Yes | — | Pop
-// | Yes | No | — | Push
+// | No | No | - | None
+// | No | Yes | - | Pop
+// | Yes | No | - | Push
 // | Yes | Yes | No | Pop, then push
 // | Yes | Yes | Yes | Push
 //
@@ -62,7 +62,7 @@ address Disassembler::decode_instruction0(address here, outputStream * st, addre
 
 if (Assembler::is_z_nop((long)instruction_2bytes)) {
 #if 1
 st->print("nop "); // fill up to operand column, leads to better code comment alignment
 next = here + 2;
 #else
 // Compact disassembler output. Does not work the easy way.
@@ -76,7 +76,7 @@ address Disassembler::decode_instruction0(address here, outputStream * st, addre
 instruction_2bytes = *(uint16_t*)(here+2*n_nops);
 }
 if (n_nops <= 4) { // do not group few subsequent nops
 st->print("nop "); // fill up to operand column, leads to better code comment alignment
 next = here + 2;
 } else {
 st->print("nop count=%d", n_nops);
@@ -6581,7 +6581,7 @@ instruct mulHiL_reg_reg(revenRegL Rdst, roddRegL Rsrc1, iRegL Rsrc2, iRegL Rtmp1
 Register tmp1 = $Rtmp1$$Register;
 Register tmp2 = $Rdst$$Register;
 // z/Architecture has only unsigned multiply (64 * 64 -> 128).
-// implementing mulhs(a,b) = mulhu(a,b) – (a & (b>>63)) – (b & (a>>63))
+// implementing mulhs(a,b) = mulhu(a,b) - (a & (b>>63)) - (b & (a>>63))
 __ z_srag(tmp2, src1, 63); // a>>63
 __ z_srag(tmp1, src2, 63); // b>>63
 __ z_ngr(tmp2, src2); // b & (a>>63)
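The identity in the corrected comment can be checked in isolation; a small sketch, assuming a GCC/Clang-style `__int128` for the reference product (none of these names are from the patch):

    #include <cassert>
    #include <cstdint>

    static uint64_t mulhu(uint64_t a, uint64_t b) {   // high 64 bits of the unsigned product
      return (uint64_t)(((unsigned __int128)a * b) >> 64);
    }

    static int64_t mulhs(int64_t a, int64_t b) {      // high 64 bits of the signed product
      uint64_t hi = mulhu((uint64_t)a, (uint64_t)b);
      hi -= (uint64_t)a & (uint64_t)(b >> 63);        // subtract a if b < 0 (arithmetic shift)
      hi -= (uint64_t)b & (uint64_t)(a >> 63);        // subtract b if a < 0
      return (int64_t)hi;
    }

    int main() {
      assert(mulhs(-3, 5) == (int64_t)(((__int128)-3 * 5) >> 64));
      assert(mulhs(INT64_MIN, 7) == (int64_t)(((__int128)INT64_MIN * 7) >> 64));
      return 0;
    }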
@@ -332,7 +332,7 @@ typedef struct { /* component perfstat_cpu_t from AIX 7.2 documentation */
 u_longlong_t busy_stolen_purr; /* Number of busy cycles stolen by the hypervisor from a dedicated partition. */
 u_longlong_t busy_stolen_spurr; /* Number of busy spurr cycles stolen by the hypervisor from a dedicated partition.*/
 u_longlong_t shcpus_in_sys; /* Number of physical processors allocated for shared processor use, across all shared processors pools. */
-u_longlong_t entitled_pool_capacity; /* Entitled processor capacity of partition’s pool. */
+u_longlong_t entitled_pool_capacity; /* Entitled processor capacity of partition's pool. */
 u_longlong_t pool_max_time; /* Summation of maximum time that can be consumed by the pool (nanoseconds). */
 u_longlong_t pool_busy_time; /* Summation of busy (nonidle) time accumulated across all partitions in the pool (nanoseconds). */
 u_longlong_t pool_scaled_busy_time; /* Scaled summation of busy (nonidle) time accumulated across all partitions in the pool (nanoseconds). */
@@ -295,7 +295,7 @@ DECLARE_FUNC(aarch64_atomic_cmpxchg_8_relaxed_default_impl):
 ret
 
 /* Emit .note.gnu.property section in case of PAC or BTI being enabled.
-* For more details see "ELF for the Arm® 64-bit Architecture (AArch64)".
+* For more details see "ELF for the Arm(R) 64-bit Architecture (AArch64)".
 * https://github.com/ARM-software/abi-aa/blob/main/aaelf64/aaelf64.rst
 */
 #ifdef __ARM_FEATURE_BTI_DEFAULT
@@ -269,7 +269,7 @@ bwd_copy_drain:
 ret
 
 /* Emit .note.gnu.property section in case of PAC or BTI being enabled.
-* For more details see "ELF for the Arm® 64-bit Architecture (AArch64)".
+* For more details see "ELF for the Arm(R) 64-bit Architecture (AArch64)".
 * https://github.com/ARM-software/abi-aa/blob/main/aaelf64/aaelf64.rst
 */
 #ifdef __ARM_FEATURE_BTI_DEFAULT
@@ -50,7 +50,7 @@ DECLARE_FUNC(_SafeFetchN_continuation):
 ret
 
 /* Emit .note.gnu.property section in case of PAC or BTI being enabled.
-* For more details see "ELF for the Arm® 64-bit Architecture (AArch64)".
+* For more details see "ELF for the Arm(R) 64-bit Architecture (AArch64)".
 * https://github.com/ARM-software/abi-aa/blob/main/aaelf64/aaelf64.rst
 */
 #ifdef __ARM_FEATURE_BTI_DEFAULT
@@ -46,7 +46,7 @@ DECLARE_FUNC(_ZN10JavaThread25aarch64_get_thread_helperEv):
 .size _ZN10JavaThread25aarch64_get_thread_helperEv, .-_ZN10JavaThread25aarch64_get_thread_helperEv
 
 /* Emit .note.gnu.property section in case of PAC or BTI being enabled.
-* For more details see "ELF for the Arm® 64-bit Architecture (AArch64)".
+* For more details see "ELF for the Arm(R) 64-bit Architecture (AArch64)".
 * https://github.com/ARM-software/abi-aa/blob/main/aaelf64/aaelf64.rst
 */
 #ifdef __ARM_FEATURE_BTI_DEFAULT
@@ -54,13 +54,13 @@ inline void OrderAccess::fence() {
 }
 
 inline void OrderAccess::cross_modify_fence_impl() {
-// From 3 “Zifencei” Instruction-Fetch Fence, Version 2.0
+// From 3 "Zifencei" Instruction-Fetch Fence, Version 2.0
 // "RISC-V does not guarantee that stores to instruction memory will be made
 // visible to instruction fetches on a RISC-V hart until that hart executes a
 // FENCE.I instruction. A FENCE.I instruction ensures that a subsequent
 // instruction fetch on a RISC-V hart will see any previous data stores
 // already visible to the same RISC-V hart. FENCE.I does not ensure that other
-// RISC-V harts’ instruction fetches will observe the local hart’s stores in a
+// RISC-V harts' instruction fetches will observe the local hart's stores in a
 // multiprocessor system."
 //
 // Hence to be able to use fence.i directly we need a kernel that supports
@@ -106,7 +106,7 @@ public:
 // within the archive (e.g., InstanceKlass::_name points to a Symbol in the archive). During dumping, we
 // built a bitmap that marks the locations of all these pointers (using ArchivePtrMarker, see comments above).
 //
-// The contents of the archive assumes that it’s mapped at the default SharedBaseAddress (e.g. 0x800000000).
+// The contents of the archive assumes that it's mapped at the default SharedBaseAddress (e.g. 0x800000000).
 // If the archive ends up being mapped at a different address (e.g. 0x810000000), SharedDataRelocator
 // is used to shift each marked pointer by a delta (0x10000000 in this example), so that it points to
 // the actually mapped location of the target object.
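As a rough sketch of the relocation idea that comment describes (the function and container are illustrative only; the real code walks the ArchivePtrMarker bitmap rather than a vector of locations):

    #include <cstdint>
    #include <vector>

    // Shift every location that was marked as holding a pointer into the archive
    // by the difference between the actual and the default mapping address.
    static void relocate_marked_pointers(std::vector<uintptr_t*>& marked_locations,
                                         intptr_t delta /* actual_base - default_base */) {
      for (uintptr_t* loc : marked_locations) {
        *loc = (uintptr_t)((intptr_t)*loc + delta);  // e.g. delta = 0x810000000 - 0x800000000
      }
    }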
@@ -433,7 +433,7 @@ void Method::set_itable_index(int index) {
 // itable index should be the same as the runtime index.
 assert(_vtable_index == itable_index_max - index,
 "archived itable index is different from runtime index");
-return; // don’t write into the shared class
+return; // don't write into the shared class
 } else {
 _vtable_index = itable_index_max - index;
 }
@@ -70,7 +70,7 @@ public:
 ~G1CardSetTest() { }
 
 static uint next_random(uint& seed, uint i) {
-// Park–Miller random number generator
+// Park-Miller random number generator
 seed = (seed * 279470273u) % 0xfffffffb;
 return (seed % i);
 }
@@ -82,7 +82,7 @@ void TestReserveMemorySpecial_test() {
 // Instead try reserving after the first reservation.
 expected_location = result + large_allocation_size;
 actual_location = os::reserve_memory_special(expected_allocation_size, os::large_page_size(), os::large_page_size(), expected_location, false);
-EXPECT_TRUE(actual_location != nullptr) << "Unexpected reservation failure, can’t verify correct location";
+EXPECT_TRUE(actual_location != nullptr) << "Unexpected reservation failure, can't verify correct location";
 EXPECT_TRUE(actual_location == expected_location) << "Reservation must be at requested location";
 MemoryReleaser m2(actual_location, os::large_page_size());
 
@@ -90,7 +90,7 @@ void TestReserveMemorySpecial_test() {
 const size_t alignment = os::large_page_size() * 2;
 const size_t new_large_size = alignment * 4;
 char* aligned_request = os::reserve_memory_special(new_large_size, alignment, os::large_page_size(), nullptr, false);
-EXPECT_TRUE(aligned_request != nullptr) << "Unexpected reservation failure, can’t verify correct alignment";
+EXPECT_TRUE(aligned_request != nullptr) << "Unexpected reservation failure, can't verify correct alignment";
 EXPECT_TRUE(is_aligned(aligned_request, alignment)) << "Returned address must be aligned";
 MemoryReleaser m3(aligned_request, new_large_size);
 }
@@ -191,7 +191,7 @@ class TestZGCCorrectBarrierElision {
 static void testAllocateThenAtomic(Inner i) {
 Outer o = new Outer();
 Common.blackhole(o);
 Common.field1VarHandle.getAndSet(o, i);
 }
 
 @Test
@@ -199,14 +199,14 @@ class TestZGCCorrectBarrierElision {
 @IR(counts = { IRNode.Z_GET_AND_SET_P_WITH_BARRIER_FLAG, Common.REMAINING, "1" }, phase = CompilePhase.FINAL_CODE)
 static void testLoadThenAtomic(Outer o, Inner i) {
 Common.blackhole(o.field1);
 Common.field1VarHandle.getAndSet(o, i);
 }
 
 @Test
 @IR(counts = { IRNode.Z_GET_AND_SET_P_WITH_BARRIER_FLAG, Common.REMAINING, "2" }, phase = CompilePhase.FINAL_CODE)
 static void testAtomicThenAtomicAnotherField(Outer o, Inner i) {
 Common.field1VarHandle.getAndSet(o, i);
 Common.field2VarHandle.getAndSet(o, i);
 }
 
 @Test
@@ -390,14 +390,14 @@ class TestZGCEffectiveBarrierElision {
 @IR(counts = { IRNode.Z_GET_AND_SET_P_WITH_BARRIER_FLAG, Common.ELIDED, "1" }, phase = CompilePhase.FINAL_CODE)
 static void testStoreThenAtomic(Outer o, Inner i) {
 o.field1 = i;
 Common.field1VarHandle.getAndSet(o, i);
 }
 
 @Test
 @IR(counts = { IRNode.Z_GET_AND_SET_P_WITH_BARRIER_FLAG, Common.REMAINING, "1" }, phase = CompilePhase.FINAL_CODE)
 @IR(counts = { IRNode.Z_LOAD_P_WITH_BARRIER_FLAG, Common.ELIDED, "1" }, phase = CompilePhase.FINAL_CODE)
 static void testAtomicThenLoad(Outer o, Inner i) {
 Common.field1VarHandle.getAndSet(o, i);
 Common.blackhole(o.field1);
 }
 
@@ -405,7 +405,7 @@ class TestZGCEffectiveBarrierElision {
 @IR(counts = { IRNode.Z_GET_AND_SET_P_WITH_BARRIER_FLAG, Common.REMAINING, "1" }, phase = CompilePhase.FINAL_CODE)
 @IR(counts = { IRNode.Z_STORE_P_WITH_BARRIER_FLAG, Common.ELIDED, "1" }, phase = CompilePhase.FINAL_CODE)
 static void testAtomicThenStore(Outer o, Inner i) {
 Common.field1VarHandle.getAndSet(o, i);
 o.field1 = i;
 }
 
@@ -413,8 +413,8 @@ class TestZGCEffectiveBarrierElision {
 @IR(counts = { IRNode.Z_GET_AND_SET_P_WITH_BARRIER_FLAG, Common.REMAINING, "1" }, phase = CompilePhase.FINAL_CODE)
 @IR(counts = { IRNode.Z_GET_AND_SET_P_WITH_BARRIER_FLAG, Common.ELIDED, "1" }, phase = CompilePhase.FINAL_CODE)
 static void testAtomicThenAtomic(Outer o, Inner i) {
 Common.field1VarHandle.getAndSet(o, i);
 Common.field1VarHandle.getAndSet(o, i);
 }
 
 @Test
@@ -100,7 +100,7 @@ import jdk.test.lib.Utils;
 * <p>
 * Unless you have reasons to pick a specific distribution, you are encouraged to rely on {@link #ints()},
 * {@link #longs()}, {@link #doubles()} and {@link #floats()}, which will randomly pick an interesting distribution.
-* This is best practice, because that allows the test to be run under different conditions – maybe only a single
+* This is best practice, because that allows the test to be run under different conditions - maybe only a single
 * distribution can trigger a bug.
 */
 public final class Generators {
@@ -435,7 +435,7 @@ enum NumberType {
 this.rndFnc = rndFnc;
 }
 
-public String getСType() {
+public String getCType() {
 return cType;
 }
 
@@ -443,7 +443,7 @@ enum NumberType {
 return jType;
 }
 
-public String getСConv() {
+public String getCConv() {
 return cConv;
 }
 
@@ -792,9 +792,9 @@ class ParameterListGenerator {
 
 String randomVal = list.get(type).getFnc();
 
-String ctype = list.get(type).getСType();
+String ctype = list.get(type).getCType();
 String jtype = list.get(type).getJType();
-String cconv = list.get(type).getСConv();
+String cconv = list.get(type).getCConv();
 String jconv = list.get(type).getJConv();
 
 String varName = "p" + cnt;
@@ -91,7 +91,7 @@ import java.util.Vector;
 * This bug is largely unnoticed because most {@code Raster.create}
 * methods actually create {@link WritableRaster} instances, even
 * when the user did not asked for writable raster. To make this
-* bug apparent, we need to invoke {@code Raster.createRaster(…)}
+* bug apparent, we need to invoke {@code Raster.createRaster(...)}
 * with a sample model for which no optimization is provided.
 */
 public class TiledImage implements RenderedImage {
@@ -1,4 +1,4 @@
 <?xml version="1.0" encoding="UTF-8"?>
 <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" elementFormDefault="unqualified" attributeFormDefault="unqualified" version="1.0">
 <xs:element name="recording">
 <xs:complexType>