// Ractor implementation

#include "ruby/ruby.h"
#include "ruby/thread.h"
#include "ruby/ractor.h"
#include "ruby/thread_native.h"
#include "vm_core.h"
#include "eval_intern.h"
#include "vm_sync.h"
#include "ractor_core.h"

#include "internal/complex.h"
#include "internal/error.h"
#include "internal/gc.h"
#include "internal/hash.h"
#include "internal/object.h"
#include "internal/ractor.h"
#include "internal/rational.h"
#include "internal/struct.h"
#include "internal/thread.h"
#include "variable.h"
#include "yjit.h"

VALUE rb_cRactor;
static VALUE rb_cRactorSelector;

VALUE rb_eRactorUnsafeError;
VALUE rb_eRactorIsolationError;
static VALUE rb_eRactorError;
static VALUE rb_eRactorRemoteError;
static VALUE rb_eRactorMovedError;
static VALUE rb_eRactorClosedError;
static VALUE rb_cRactorMovedObject;

static void vm_ractor_blocking_cnt_inc(rb_vm_t *vm, rb_ractor_t *r, const char *file, int line);

// Ractor locking
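//
// Each ractor has its own lock (r->sync.lock). Under RACTOR_CHECK_MODE the
// current owner's Ractor object is also recorded in r->sync.locked_by so the
// ASSERT_ractor_locking()/ASSERT_ractor_unlocking() helpers below can catch
// missing or recursive locking in debug builds.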

static void
ASSERT_ractor_unlocking(rb_ractor_t *r)
{
#if RACTOR_CHECK_MODE > 0
    const rb_execution_context_t *ec = rb_current_ec_noinline();
    if (ec != NULL && r->sync.locked_by == rb_ractor_self(rb_ec_ractor_ptr(ec))) {
        rb_bug("recursive ractor locking");
    }
#endif
}

static void
ASSERT_ractor_locking(rb_ractor_t *r)
{
#if RACTOR_CHECK_MODE > 0
    const rb_execution_context_t *ec = rb_current_ec_noinline();
    if (ec != NULL && r->sync.locked_by != rb_ractor_self(rb_ec_ractor_ptr(ec))) {
        rp(r->sync.locked_by);
        rb_bug("ractor lock is not acquired.");
    }
#endif
}

static void
ractor_lock(rb_ractor_t *r, const char *file, int line)
{
    RUBY_DEBUG_LOG2(file, line, "locking r:%u%s", r->pub.id, rb_current_ractor_raw(false) == r ? " (self)" : "");

    ASSERT_ractor_unlocking(r);
    rb_native_mutex_lock(&r->sync.lock);

#if RACTOR_CHECK_MODE > 0
    if (rb_current_execution_context(false) != NULL) {
        rb_ractor_t *cr = rb_current_ractor_raw(false);
        r->sync.locked_by = cr ? rb_ractor_self(cr) : Qundef;
    }
#endif

    RUBY_DEBUG_LOG2(file, line, "locked r:%u%s", r->pub.id, rb_current_ractor_raw(false) == r ? " (self)" : "");
}

static void
ractor_lock_self(rb_ractor_t *cr, const char *file, int line)
{
    VM_ASSERT(cr == rb_ec_ractor_ptr(rb_current_ec_noinline()));
#if RACTOR_CHECK_MODE > 0
    VM_ASSERT(cr->sync.locked_by != cr->pub.self);
#endif
    ractor_lock(cr, file, line);
}

static void
ractor_unlock(rb_ractor_t *r, const char *file, int line)
{
    ASSERT_ractor_locking(r);
#if RACTOR_CHECK_MODE > 0
    r->sync.locked_by = Qnil;
#endif
    rb_native_mutex_unlock(&r->sync.lock);

    RUBY_DEBUG_LOG2(file, line, "r:%u%s", r->pub.id, rb_current_ractor_raw(false) == r ? " (self)" : "");
}

static void
ractor_unlock_self(rb_ractor_t *cr, const char *file, int line)
{
    VM_ASSERT(cr == rb_ec_ractor_ptr(rb_current_ec_noinline()));
#if RACTOR_CHECK_MODE > 0
    VM_ASSERT(cr->sync.locked_by == cr->pub.self);
#endif
    ractor_unlock(cr, file, line);
}

#define RACTOR_LOCK(r) ractor_lock(r, __FILE__, __LINE__)
#define RACTOR_UNLOCK(r) ractor_unlock(r, __FILE__, __LINE__)
#define RACTOR_LOCK_SELF(r) ractor_lock_self(r, __FILE__, __LINE__)
#define RACTOR_UNLOCK_SELF(r) ractor_unlock_self(r, __FILE__, __LINE__)
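
// The RACTOR_LOCK/RACTOR_UNLOCK macros pass __FILE__/__LINE__ down so that
// RUBY_DEBUG_LOG2 can attribute each lock/unlock to its call site in debug logs.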

void
rb_ractor_lock_self(rb_ractor_t *r)
{
    RACTOR_LOCK_SELF(r);
}

void
rb_ractor_unlock_self(rb_ractor_t *r)
{
    RACTOR_UNLOCK_SELF(r);
}

// Ractor status
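//
// r->status_ moves from created to blocking to running, then alternates
// between running and blocking, and finally becomes terminated (reached from
// running). ractor_status_set() below asserts exactly these transitions.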

static const char *
ractor_status_str(enum ractor_status status)
{
    switch (status) {
      case ractor_created: return "created";
      case ractor_running: return "running";
      case ractor_blocking: return "blocking";
      case ractor_terminated: return "terminated";
    }
    rb_bug("unreachable");
}

static void
ractor_status_set(rb_ractor_t *r, enum ractor_status status)
{
    RUBY_DEBUG_LOG("r:%u [%s]->[%s]", r->pub.id, ractor_status_str(r->status_), ractor_status_str(status));

    // check 1
    if (r->status_ != ractor_created) {
        VM_ASSERT(r == GET_RACTOR()); // only self-modification is allowed.
        ASSERT_vm_locking();
    }

    // check 2: transition check. assume it will vanish on non-debug builds.
    switch (r->status_) {
      case ractor_created:
        VM_ASSERT(status == ractor_blocking);
        break;
      case ractor_running:
        VM_ASSERT(status == ractor_blocking ||
                  status == ractor_terminated);
        break;
      case ractor_blocking:
        VM_ASSERT(status == ractor_running);
        break;
      case ractor_terminated:
        rb_bug("unreachable");
        break;
    }

    r->status_ = status;
}

static bool
ractor_status_p(rb_ractor_t *r, enum ractor_status status)
{
    return rb_ractor_status_p(r, status);
}

// Ractor data/mark/free
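//
// TypedData callbacks (dmark/dfree/dsize) for the T_DATA object that wraps
// each rb_ractor_t; they are registered below in ractor_data_type.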

static struct rb_ractor_basket *ractor_queue_at(rb_ractor_t *r, struct rb_ractor_queue *rq, int i);
static void ractor_local_storage_mark(rb_ractor_t *r);
static void ractor_local_storage_free(rb_ractor_t *r);

static void
ractor_queue_mark(struct rb_ractor_queue *rq)
{
    for (int i=0; i<rq->cnt; i++) {
        struct rb_ractor_basket *b = ractor_queue_at(NULL, rq, i);
        rb_gc_mark(b->sender);

        switch (b->type.e) {
          case basket_type_yielding:
          case basket_type_take_basket:
          case basket_type_deleted:
          case basket_type_reserved:
            // ignore
            break;
          default:
            rb_gc_mark(b->p.send.v);
        }
    }
}

static void
ractor_mark(void *ptr)
{
    rb_ractor_t *r = (rb_ractor_t *)ptr;

    ractor_queue_mark(&r->sync.recv_queue);
    ractor_queue_mark(&r->sync.takers_queue);

    rb_gc_mark(r->loc);
    rb_gc_mark(r->name);
    rb_gc_mark(r->r_stdin);
    rb_gc_mark(r->r_stdout);
    rb_gc_mark(r->r_stderr);
    rb_hook_list_mark(&r->pub.hooks);

    if (r->threads.cnt > 0) {
        rb_thread_t *th = 0;
        ccan_list_for_each(&r->threads.set, th, lt_node) {
            VM_ASSERT(th != NULL);
            rb_gc_mark(th->self);
        }
    }

    ractor_local_storage_mark(r);
}

static void
ractor_queue_free(struct rb_ractor_queue *rq)
{
    free(rq->baskets);
}

static void
ractor_free(void *ptr)
{
    rb_ractor_t *r = (rb_ractor_t *)ptr;
    RUBY_DEBUG_LOG("free r:%d", rb_ractor_id(r));
    rb_native_mutex_destroy(&r->sync.lock);
    ractor_queue_free(&r->sync.recv_queue);
    ractor_queue_free(&r->sync.takers_queue);
    ractor_local_storage_free(r);
    rb_hook_list_free(&r->pub.hooks);

    if (r->newobj_cache) {
        RUBY_ASSERT(r == ruby_single_main_ractor);

        rb_gc_ractor_cache_free(r->newobj_cache);
        r->newobj_cache = NULL;
    }

    ruby_xfree(r);
}

static size_t
ractor_queue_memsize(const struct rb_ractor_queue *rq)
{
    return sizeof(struct rb_ractor_basket) * rq->size;
}

static size_t
ractor_memsize(const void *ptr)
{
    rb_ractor_t *r = (rb_ractor_t *)ptr;

    // TODO: more correct?
    return sizeof(rb_ractor_t) +
        ractor_queue_memsize(&r->sync.recv_queue) +
        ractor_queue_memsize(&r->sync.takers_queue);
}

static const rb_data_type_t ractor_data_type = {
    "ractor",
    {
        ractor_mark,
        ractor_free,
        ractor_memsize,
        NULL, // update
    },
    0, 0, RUBY_TYPED_FREE_IMMEDIATELY /* | RUBY_TYPED_WB_PROTECTED */
};

bool
rb_ractor_p(VALUE gv)
{
    if (rb_typeddata_is_kind_of(gv, &ractor_data_type)) {
        return true;
    }
    else {
        return false;
    }
}

static inline rb_ractor_t *
RACTOR_PTR(VALUE self)
{
    VM_ASSERT(rb_ractor_p(self));
    rb_ractor_t *r = DATA_PTR(self);
    return r;
}

static rb_atomic_t ractor_last_id;

#if RACTOR_CHECK_MODE > 0
uint32_t
rb_ractor_current_id(void)
{
    if (GET_THREAD()->ractor == NULL) {
        return 1; // main ractor
    }
    else {
        return rb_ractor_id(GET_RACTOR());
    }
}
#endif

// Ractor queue
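//
// A rb_ractor_queue is a growable ring buffer of baskets: `baskets` holds
// `size` slots, `start` is the index of the head, and element i lives at
// (start + i) % size. Entries cannot always be removed in place (a slot may
// still be reserved), so removal marks a basket as basket_type_deleted and
// ractor_queue_compact() later pops deleted entries off the head. When the
// buffer is full, ractor_queue_enq() doubles `size` and relocates the
// wrapped-around tail so the modular indexing stays valid.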

static void
ractor_queue_setup(struct rb_ractor_queue *rq)
{
    rq->size = 2;
    rq->cnt = 0;
    rq->start = 0;
    rq->baskets = malloc(sizeof(struct rb_ractor_basket) * rq->size);
}

static struct rb_ractor_basket *
ractor_queue_head(rb_ractor_t *r, struct rb_ractor_queue *rq)
{
    if (r != NULL) ASSERT_ractor_locking(r);
    return &rq->baskets[rq->start];
}

static struct rb_ractor_basket *
ractor_queue_at(rb_ractor_t *r, struct rb_ractor_queue *rq, int i)
{
    if (r != NULL) ASSERT_ractor_locking(r);
    return &rq->baskets[(rq->start + i) % rq->size];
}

static void
ractor_queue_advance(rb_ractor_t *r, struct rb_ractor_queue *rq)
{
    ASSERT_ractor_locking(r);

    if (rq->reserved_cnt == 0) {
        rq->cnt--;
        rq->start = (rq->start + 1) % rq->size;
        rq->serial++;
    }
    else {
        ractor_queue_at(r, rq, 0)->type.e = basket_type_deleted;
    }
}

static bool
ractor_queue_skip_p(rb_ractor_t *r, struct rb_ractor_queue *rq, int i)
{
    struct rb_ractor_basket *b = ractor_queue_at(r, rq, i);
    return basket_type_p(b, basket_type_deleted) ||
           basket_type_p(b, basket_type_reserved);
}

static void
ractor_queue_compact(rb_ractor_t *r, struct rb_ractor_queue *rq)
{
    ASSERT_ractor_locking(r);

    while (rq->cnt > 0 && basket_type_p(ractor_queue_at(r, rq, 0), basket_type_deleted)) {
        ractor_queue_advance(r, rq);
    }
}

static bool
ractor_queue_empty_p(rb_ractor_t *r, struct rb_ractor_queue *rq)
{
    ASSERT_ractor_locking(r);

    if (rq->cnt == 0) {
        return true;
    }

    ractor_queue_compact(r, rq);

    for (int i=0; i<rq->cnt; i++) {
        if (!ractor_queue_skip_p(r, rq, i)) {
            return false;
        }
    }

    return true;
}

static bool
ractor_queue_deq(rb_ractor_t *r, struct rb_ractor_queue *rq, struct rb_ractor_basket *basket)
{
    ASSERT_ractor_locking(r);

    for (int i=0; i<rq->cnt; i++) {
        if (!ractor_queue_skip_p(r, rq, i)) {
            struct rb_ractor_basket *b = ractor_queue_at(r, rq, i);
            *basket = *b;

            // remove from queue
            b->type.e = basket_type_deleted;
            ractor_queue_compact(r, rq);
            return true;
        }
    }

    return false;
}

static void
ractor_queue_enq(rb_ractor_t *r, struct rb_ractor_queue *rq, struct rb_ractor_basket *basket)
{
    ASSERT_ractor_locking(r);

    if (rq->size <= rq->cnt) {
        rq->baskets = realloc(rq->baskets, sizeof(struct rb_ractor_basket) * rq->size * 2);
        for (int i=rq->size - rq->start; i<rq->cnt; i++) {
            rq->baskets[i + rq->start] = rq->baskets[i + rq->start - rq->size];
        }
        rq->size *= 2;
    }

    // copy basket into queue
    rq->baskets[(rq->start + rq->cnt++) % rq->size] = *basket;
    // fprintf(stderr, "%s %p->cnt:%d\n", RUBY_FUNCTION_NAME_STRING, (void *)rq, rq->cnt);
}

static void
ractor_queue_delete(rb_ractor_t *r, struct rb_ractor_queue *rq, struct rb_ractor_basket *basket)
{
    basket->type.e = basket_type_deleted;
}

// Ractor basket
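//
// A basket is one message in transit between ractors: the value itself
// (b->p.send.v), the sending ractor (b->sender), and a type recording how the
// value crosses the boundary (ref, copy, move, will, ...).
// ractor_basket_value() below normalizes copy/move/will baskets into plain
// refs after resetting the value's ractor-belonging bookkeeping via
// ractor_reset_belonging().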

static VALUE ractor_reset_belonging(VALUE obj); // in this file

static VALUE
ractor_basket_value(struct rb_ractor_basket *b)
{
    switch (b->type.e) {
      case basket_type_ref:
        break;
      case basket_type_copy:
      case basket_type_move:
      case basket_type_will:
        b->type.e = basket_type_ref;
        b->p.send.v = ractor_reset_belonging(b->p.send.v);
        break;
      default:
        rb_bug("unreachable");
    }

    return b->p.send.v;
}

static VALUE
ractor_basket_accept(struct rb_ractor_basket *b)
{
    VALUE v = ractor_basket_value(b);

    // a ractor's main thread had an error and yielded us this exception during its dying moments
    if (b->p.send.exception) {
        VALUE cause = v;
        VALUE err = rb_exc_new_cstr(rb_eRactorRemoteError, "thrown by remote Ractor.");
        rb_ivar_set(err, rb_intern("@ractor"), b->sender);
        rb_ec_setup_exception(NULL, err, cause);
        rb_exc_raise(err);
    }

    return v;
}

// Ractor synchronizations
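//
// Blocking ractor operations park the calling thread, not the whole ractor:
// the thread records what it is waiting for in th->ractor_waiting.wait_status
// and sleeps until a peer finds it via ractor_sleeping_by(), stores a
// wakeup_status, and wakes it through rb_ractor_sched_wakeup(). All of this
// happens while holding the target ractor's lock.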

#if USE_RUBY_DEBUG_LOG
static const char *
wait_status_str(enum rb_ractor_wait_status wait_status)
{
    switch ((int)wait_status) {
      case wait_none: return "none";
      case wait_receiving: return "receiving";
      case wait_taking: return "taking";
      case wait_yielding: return "yielding";
      case wait_receiving|wait_taking: return "receiving|taking";
      case wait_receiving|wait_yielding: return "receiving|yielding";
      case wait_taking|wait_yielding: return "taking|yielding";
      case wait_receiving|wait_taking|wait_yielding: return "receiving|taking|yielding";
    }
    rb_bug("unreachable");
}

static const char *
wakeup_status_str(enum rb_ractor_wakeup_status wakeup_status)
{
    switch (wakeup_status) {
      case wakeup_none: return "none";
      case wakeup_by_send: return "by_send";
      case wakeup_by_yield: return "by_yield";
      case wakeup_by_take: return "by_take";
      case wakeup_by_close: return "by_close";
      case wakeup_by_interrupt: return "by_interrupt";
      case wakeup_by_retry: return "by_retry";
    }
    rb_bug("unreachable");
}

static const char *
basket_type_name(enum rb_ractor_basket_type type)
{
    switch (type) {
      case basket_type_none: return "none";
      case basket_type_ref: return "ref";
      case basket_type_copy: return "copy";
      case basket_type_move: return "move";
      case basket_type_will: return "will";
      case basket_type_deleted: return "deleted";
      case basket_type_reserved: return "reserved";
      case basket_type_take_basket: return "take_basket";
      case basket_type_yielding: return "yielding";
    }
    VM_ASSERT(0);
    return NULL;
}
#endif // USE_RUBY_DEBUG_LOG

static rb_thread_t *
ractor_sleeping_by(const rb_ractor_t *r, rb_thread_t *th, enum rb_ractor_wait_status wait_status)
{
    if (th) {
        if ((th->ractor_waiting.wait_status & wait_status) && th->ractor_waiting.wakeup_status == wakeup_none) {
            return th;
        }
    }
    else {
        // find any blocked thread that has this ractor wait status
        ccan_list_for_each(&r->sync.wait.waiting_threads, th, ractor_waiting.waiting_node) {
            if ((th->ractor_waiting.wait_status & wait_status) && th->ractor_waiting.wakeup_status == wakeup_none) {
                return th;
            }
        }
    }
    return NULL;
}

#ifdef RUBY_THREAD_PTHREAD_H
// thread_*.c
void rb_ractor_sched_wakeup(rb_ractor_t *r, rb_thread_t *th);
#else

// win32
static void
rb_ractor_sched_wakeup(rb_ractor_t *r, rb_thread_t *th)
{
    (void)r;
    ASSERT_ractor_locking(r);
    rb_native_cond_signal(&th->ractor_waiting.cond);
}
#endif
|
|
|
|
|
|
|
|
|
Get ractor message passing working with > 1 thread sending/receiving values in same ractor
Rework ractors so that any ractor action (Ractor.receive, Ractor#send, Ractor.yield, Ractor#take,
Ractor.select) will operate on the thread that called the action. It will put that thread to sleep if
it's a blocking function and it needs to put it to sleep, and the awakening action (Ractor.yield,
Ractor#send) will wake up the blocked thread.
Before this change every blocking ractor action was associated with the ractor struct and its fields.
If a ractor called Ractor.receive, its wait status was wait_receiving, and when another ractor calls
r.send on it, it will look for that status in the ractor struct fields and wake it up. The problem was that
what if 2 threads call blocking ractor actions in the same ractor. Imagine if 1 thread has called Ractor.receive
and another r.take. Then, when a different ractor calls r.send on it, it doesn't know which ruby thread is associated
to which ractor action, so what ruby thread should it schedule? This change moves some fields onto the ruby thread
itself so that ruby threads are the ones that have ractor blocking statuses, and threads are then specifically scheduled
when unblocked.
Fixes [#17624]
Fixes [#21037]
2025-05-12 18:03:22 -04:00
|
|
|
/*
|
|
|
|
* Wakeup `r` if the given `th` is blocked and has the given ractor `wait_status`.
|
|
|
|
* Wakeup any blocked thread in `r` with the given ractor `wait_status` if `th` is NULL.
|
|
|
|
*/
|
2021-01-22 02:48:31 +09:00
|
|
|
static bool
|
Get ractor message passing working with > 1 thread sending/receiving values in same ractor
Rework ractors so that any ractor action (Ractor.receive, Ractor#send, Ractor.yield, Ractor#take,
Ractor.select) will operate on the thread that called the action. It will put that thread to sleep if
it's a blocking function and it needs to put it to sleep, and the awakening action (Ractor.yield,
Ractor#send) will wake up the blocked thread.
Before this change every blocking ractor action was associated with the ractor struct and its fields.
If a ractor called Ractor.receive, its wait status was wait_receiving, and when another ractor calls
r.send on it, it will look for that status in the ractor struct fields and wake it up. The problem was that
what if 2 threads call blocking ractor actions in the same ractor. Imagine if 1 thread has called Ractor.receive
and another r.take. Then, when a different ractor calls r.send on it, it doesn't know which ruby thread is associated
to which ractor action, so what ruby thread should it schedule? This change moves some fields onto the ruby thread
itself so that ruby threads are the ones that have ractor blocking statuses, and threads are then specifically scheduled
when unblocked.
Fixes [#17624]
Fixes [#21037]
2025-05-12 18:03:22 -04:00
|
|
|
ractor_wakeup(rb_ractor_t *r, rb_thread_t *th /* can be NULL */, enum rb_ractor_wait_status wait_status, enum rb_ractor_wakeup_status wakeup_status)
|
2021-01-22 02:48:31 +09:00
|
|
|
{
|
|
|
|
ASSERT_ractor_locking(r);
|
|
|
|
|
2025-05-23 13:53:00 -04:00
|
|
|
RUBY_DEBUG_LOG("r:%u wait:%s wakeup:%s",
|
2023-02-24 18:46:17 +09:00
|
|
|
rb_ractor_id(r),
|
|
|
|
wait_status_str(wait_status),
|
|
|
|
wakeup_status_str(wakeup_status));
|
2021-01-22 02:48:31 +09:00
|
|
|
|
Get ractor message passing working with > 1 thread sending/receiving values in same ractor
Rework ractors so that any ractor action (Ractor.receive, Ractor#send, Ractor.yield, Ractor#take,
Ractor.select) will operate on the thread that called the action. It will put that thread to sleep if
it's a blocking function and it needs to put it to sleep, and the awakening action (Ractor.yield,
Ractor#send) will wake up the blocked thread.
Before this change every blocking ractor action was associated with the ractor struct and its fields.
If a ractor called Ractor.receive, its wait status was wait_receiving, and when another ractor calls
r.send on it, it will look for that status in the ractor struct fields and wake it up. The problem was that
what if 2 threads call blocking ractor actions in the same ractor. Imagine if 1 thread has called Ractor.receive
and another r.take. Then, when a different ractor calls r.send on it, it doesn't know which ruby thread is associated
to which ractor action, so what ruby thread should it schedule? This change moves some fields onto the ruby thread
itself so that ruby threads are the ones that have ractor blocking statuses, and threads are then specifically scheduled
when unblocked.
Fixes [#17624]
Fixes [#21037]
2025-05-12 18:03:22 -04:00
|
|
|
if ((th = ractor_sleeping_by(r, th, wait_status)) != NULL) {
|
|
|
|
th->ractor_waiting.wakeup_status = wakeup_status;
|
|
|
|
rb_ractor_sched_wakeup(r, th);
|
2021-01-22 02:48:31 +09:00
|
|
|
return true;
|
|
|
|
}
|
|
|
|
else {
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
Get ractor message passing working with > 1 thread sending/receiving values in same ractor
Rework ractors so that any ractor action (Ractor.receive, Ractor#send, Ractor.yield, Ractor#take,
Ractor.select) will operate on the thread that called the action. It will put that thread to sleep if
it's a blocking function and it needs to put it to sleep, and the awakening action (Ractor.yield,
Ractor#send) will wake up the blocked thread.
Before this change every blocking ractor action was associated with the ractor struct and its fields.
If a ractor called Ractor.receive, its wait status was wait_receiving, and when another ractor calls
r.send on it, it will look for that status in the ractor struct fields and wake it up. The problem was that
what if 2 threads call blocking ractor actions in the same ractor. Imagine if 1 thread has called Ractor.receive
and another r.take. Then, when a different ractor calls r.send on it, it doesn't know which ruby thread is associated
to which ractor action, so what ruby thread should it schedule? This change moves some fields onto the ruby thread
itself so that ruby threads are the ones that have ractor blocking statuses, and threads are then specifically scheduled
when unblocked.
Fixes [#17624]
Fixes [#21037]
2025-05-12 18:03:22 -04:00
|
|
|
// unblock function (UBF). This gets called when another thread on this or another ractor sets our thread's interrupt flag.
|
|
|
|
// This is not async-safe.
|
2023-04-10 10:53:13 +09:00
|
|
|
static void
|
|
|
|
ractor_sleep_interrupt(void *ptr)
|
|
|
|
{
|
Get ractor message passing working with > 1 thread sending/receiving values in same ractor
Rework ractors so that any ractor action (Ractor.receive, Ractor#send, Ractor.yield, Ractor#take,
Ractor.select) will operate on the thread that called the action. It will put that thread to sleep if
it's a blocking function and it needs to put it to sleep, and the awakening action (Ractor.yield,
Ractor#send) will wake up the blocked thread.
Before this change every blocking ractor action was associated with the ractor struct and its fields.
If a ractor called Ractor.receive, its wait status was wait_receiving, and when another ractor calls
r.send on it, it will look for that status in the ractor struct fields and wake it up. The problem was that
what if 2 threads call blocking ractor actions in the same ractor. Imagine if 1 thread has called Ractor.receive
and another r.take. Then, when a different ractor calls r.send on it, it doesn't know which ruby thread is associated
to which ractor action, so what ruby thread should it schedule? This change moves some fields onto the ruby thread
itself so that ruby threads are the ones that have ractor blocking statuses, and threads are then specifically scheduled
when unblocked.
Fixes [#17624]
Fixes [#21037]
2025-05-12 18:03:22 -04:00
|
|
|
rb_execution_context_t *ec = ptr;
|
|
|
|
rb_ractor_t *r = rb_ec_ractor_ptr(ec);
|
|
|
|
rb_thread_t *th = rb_ec_thread_ptr(ec);
|
2023-04-10 10:53:13 +09:00
|
|
|
|
|
|
|
RACTOR_LOCK(r);
|
|
|
|
{
|
Get ractor message passing working with > 1 thread sending/receiving values in same ractor
Rework ractors so that any ractor action (Ractor.receive, Ractor#send, Ractor.yield, Ractor#take,
Ractor.select) will operate on the thread that called the action. It will put that thread to sleep if
it's a blocking function and it needs to put it to sleep, and the awakening action (Ractor.yield,
Ractor#send) will wake up the blocked thread.
Before this change every blocking ractor action was associated with the ractor struct and its fields.
If a ractor called Ractor.receive, its wait status was wait_receiving, and when another ractor calls
r.send on it, it will look for that status in the ractor struct fields and wake it up. The problem was that
what if 2 threads call blocking ractor actions in the same ractor. Imagine if 1 thread has called Ractor.receive
and another r.take. Then, when a different ractor calls r.send on it, it doesn't know which ruby thread is associated
to which ractor action, so what ruby thread should it schedule? This change moves some fields onto the ruby thread
itself so that ruby threads are the ones that have ractor blocking statuses, and threads are then specifically scheduled
when unblocked.
Fixes [#17624]
Fixes [#21037]
2025-05-12 18:03:22 -04:00
|
|
|
ractor_wakeup(r, th, wait_receiving | wait_taking | wait_yielding, wakeup_by_interrupt);
|
2023-04-10 10:53:13 +09:00
|
|
|
}
|
|
|
|
RACTOR_UNLOCK(r);
|
|
|
|
}
|
|
|
|
|
|
|
|
typedef void (*ractor_sleep_cleanup_function)(rb_ractor_t *cr, void *p);
|
|
|
|
|
Get ractor message passing working with > 1 thread sending/receiving values in same ractor
Rework ractors so that any ractor action (Ractor.receive, Ractor#send, Ractor.yield, Ractor#take,
Ractor.select) will operate on the thread that called the action. It will put that thread to sleep if
it's a blocking function and it needs to put it to sleep, and the awakening action (Ractor.yield,
Ractor#send) will wake up the blocked thread.
Before this change every blocking ractor action was associated with the ractor struct and its fields.
If a ractor called Ractor.receive, its wait status was wait_receiving, and when another ractor calls
r.send on it, it will look for that status in the ractor struct fields and wake it up. The problem was that
what if 2 threads call blocking ractor actions in the same ractor. Imagine if 1 thread has called Ractor.receive
and another r.take. Then, when a different ractor calls r.send on it, it doesn't know which ruby thread is associated
to which ractor action, so what ruby thread should it schedule? This change moves some fields onto the ruby thread
itself so that ruby threads are the ones that have ractor blocking statuses, and threads are then specifically scheduled
when unblocked.
Fixes [#17624]
Fixes [#21037]
2025-05-12 18:03:22 -04:00
|
|
|
// Checks the current thread for ruby interrupts and runs the cleanup function `cf_func` with `cf_data` if
|
|
|
|
// `rb_ec_check_ints` is going to raise. See the `rb_threadptr_execute_interrupts` for info on when it can raise.
|
2023-04-10 10:53:13 +09:00
|
|
|
static void
|
Get ractor message passing working with > 1 thread sending/receiving values in same ractor
Rework ractors so that any ractor action (Ractor.receive, Ractor#send, Ractor.yield, Ractor#take,
Ractor.select) will operate on the thread that called the action. It will put that thread to sleep if
it's a blocking function and it needs to put it to sleep, and the awakening action (Ractor.yield,
Ractor#send) will wake up the blocked thread.
Before this change every blocking ractor action was associated with the ractor struct and its fields.
If a ractor called Ractor.receive, its wait status was wait_receiving, and when another ractor calls
r.send on it, it will look for that status in the ractor struct fields and wake it up. The problem was that
what if 2 threads call blocking ractor actions in the same ractor. Imagine if 1 thread has called Ractor.receive
and another r.take. Then, when a different ractor calls r.send on it, it doesn't know which ruby thread is associated
to which ractor action, so what ruby thread should it schedule? This change moves some fields onto the ruby thread
itself so that ruby threads are the ones that have ractor blocking statuses, and threads are then specifically scheduled
when unblocked.
Fixes [#17624]
Fixes [#21037]
2025-05-12 18:03:22 -04:00
|
|
|
ractor_check_ints(rb_execution_context_t *ec, rb_ractor_t *cr, rb_thread_t *cur_th, ractor_sleep_cleanup_function cf_func, void *cf_data)
|
2023-04-10 10:53:13 +09:00
|
|
|
{
|
Get ractor message passing working with > 1 thread sending/receiving values in same ractor
Rework ractors so that any ractor action (Ractor.receive, Ractor#send, Ractor.yield, Ractor#take,
Ractor.select) will operate on the thread that called the action. It will put that thread to sleep if
it's a blocking function and it needs to put it to sleep, and the awakening action (Ractor.yield,
Ractor#send) will wake up the blocked thread.
Before this change every blocking ractor action was associated with the ractor struct and its fields.
If a ractor called Ractor.receive, its wait status was wait_receiving, and when another ractor calls
r.send on it, it will look for that status in the ractor struct fields and wake it up. The problem was that
what if 2 threads call blocking ractor actions in the same ractor. Imagine if 1 thread has called Ractor.receive
and another r.take. Then, when a different ractor calls r.send on it, it doesn't know which ruby thread is associated
to which ractor action, so what ruby thread should it schedule? This change moves some fields onto the ruby thread
itself so that ruby threads are the ones that have ractor blocking statuses, and threads are then specifically scheduled
when unblocked.
Fixes [#17624]
Fixes [#21037]
2025-05-12 18:03:22 -04:00
|
|
|
if (cur_th->ractor_waiting.wait_status != wait_none) {
|
|
|
|
        enum rb_ractor_wait_status prev_wait_status = cur_th->ractor_waiting.wait_status;
        cur_th->ractor_waiting.wait_status = wait_none;
        cur_th->ractor_waiting.wakeup_status = wakeup_by_interrupt;

        RACTOR_UNLOCK(cr);
        {
            if (cf_func) {
                enum ruby_tag_type state;
                EC_PUSH_TAG(ec);
                if ((state = EC_EXEC_TAG()) == TAG_NONE) {
                    rb_ec_check_ints(ec);
                }
                EC_POP_TAG();

                if (state) {
                    (*cf_func)(cr, cf_data); // the cleanup function runs after the ubf, if a ubf was set
                    EC_JUMP_TAG(ec, state);
                }
            }
            else {
                rb_ec_check_ints(ec);
            }
        }
        RACTOR_LOCK(cr);

        cur_th->ractor_waiting.wait_status = prev_wait_status;
    }
}

#ifdef RUBY_THREAD_PTHREAD_H
void rb_ractor_sched_sleep(rb_execution_context_t *ec, rb_ractor_t *cr, rb_unblock_function_t *ubf);
#else

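// Block the calling thread on its own per-thread condition variable, using the
// ractor-wide lock as the mutex. rb_native_cond_wait releases r->sync.lock while
// sleeping, so under RACTOR_CHECK_MODE the locked_by bookkeeping is cleared for
// the duration of the wait and restored afterwards.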
static void
ractor_cond_wait(rb_ractor_t *r, rb_thread_t *th)
{
#if RACTOR_CHECK_MODE > 0
    VALUE locked_by = r->sync.locked_by;
    r->sync.locked_by = Qnil;
#endif
    rb_native_cond_wait(&th->ractor_waiting.cond, &r->sync.lock);

#if RACTOR_CHECK_MODE > 0
    r->sync.locked_by = locked_by;
#endif
}

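// Body of the blocking sleep, run via rb_nogvl (so the GVL is not held here).
// The thread status is switched to THREAD_STOPPED_FOREVER around the condition
// wait and back to THREAD_RUNNABLE once a wakeup_status has been set.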
static void *
ractor_sleep_wo_gvl(void *ptr)
{
    rb_ractor_t *cr = ptr;
    rb_execution_context_t *ec = cr->threads.running_ec;
    VM_ASSERT(GET_EC() == ec);
    rb_thread_t *cur_th = rb_ec_thread_ptr(ec);

    RACTOR_LOCK_SELF(cr);
    {
        VM_ASSERT(cur_th->ractor_waiting.wait_status != wait_none);
        // it's possible that another ractor has woken us up (ractor_wakeup),
        // so check this condition
        if (cur_th->ractor_waiting.wakeup_status == wakeup_none) {
            cur_th->status = THREAD_STOPPED_FOREVER;
            ractor_cond_wait(cr, cur_th);
            cur_th->status = THREAD_RUNNABLE;
            VM_ASSERT(cur_th->ractor_waiting.wakeup_status != wakeup_none);
        }
        else {
            RUBY_DEBUG_LOG("rare timing, no cond wait");
        }
        cur_th->ractor_waiting.wait_status = wait_none;
    }
    RACTOR_UNLOCK_SELF(cr);

    return NULL;
}

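// Fallback rb_ractor_sched_sleep for non-pthread builds (the pthread build only
// declares it above and defines it elsewhere). The calling thread is linked onto
// the ractor's waiting_threads list while it sleeps, so a wakeup can find and
// signal this specific thread, and it is unlinked again before returning.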
static void
rb_ractor_sched_sleep(rb_execution_context_t *ec, rb_ractor_t *cr, rb_unblock_function_t *ubf_ractor_sleep_interrupt)
{
    ASSERT_ractor_locking(cr);
    rb_thread_t *th = rb_ec_thread_ptr(ec);
    struct ccan_list_node *waitn = &th->ractor_waiting.waiting_node;
    VM_ASSERT(waitn->next == waitn->prev && waitn->next == waitn); // it should be unlinked
    ccan_list_add(&cr->sync.wait.waiting_threads, waitn);

    RACTOR_UNLOCK(cr);
    {
        rb_nogvl(ractor_sleep_wo_gvl, cr, ubf_ractor_sleep_interrupt, ec, RB_NOGVL_INTR_FAIL);
    }
    RACTOR_LOCK(cr);

    ccan_list_del_init(waitn);
}
#endif

/*
 * Sleep the current ractor's current thread until another ractor wakes us up
 * or another thread calls our unblock function.
 * The following ractor actions can cause this function to be called:
 *   Ractor#take    (wait_taking)
 *   Ractor.yield   (wait_yielding)
 *   Ractor.receive (wait_receiving)
 *   Ractor.select  (can be a combination of the above wait states, depending on
 *                   the states of the ractors passed to Ractor.select)
 */
static enum rb_ractor_wakeup_status
ractor_sleep_with_cleanup(rb_execution_context_t *ec, rb_ractor_t *cr, rb_thread_t *cur_th, enum rb_ractor_wait_status wait_status,
                          ractor_sleep_cleanup_function cf_func, void *cf_data)
{
    ASSERT_ractor_locking(cr);
    enum rb_ractor_wakeup_status wakeup_status;
    VM_ASSERT(GET_RACTOR() == cr);
    VM_ASSERT(cur_th->ractor_waiting.wait_status == wait_none);
    VM_ASSERT(wait_status != wait_none);
    cur_th->ractor_waiting.wait_status = wait_status;
    cur_th->ractor_waiting.wakeup_status = wakeup_none;

    // fprintf(stderr, "%s r:%p status:%s, wakeup_status:%s\n", RUBY_FUNCTION_NAME_STRING, (void *)cr,
    //         wait_status_str(cr->sync.wait.status), wakeup_status_str(cr->sync.wait.wakeup_status));

    RUBY_DEBUG_LOG("sleep by %s", wait_status_str(wait_status));

    while (cur_th->ractor_waiting.wakeup_status == wakeup_none) {
        rb_ractor_sched_sleep(ec, cr, ractor_sleep_interrupt);
        ractor_check_ints(ec, cr, cur_th, cf_func, cf_data);
    }

    cur_th->ractor_waiting.wait_status = wait_none;

    wakeup_status = cur_th->ractor_waiting.wakeup_status;
    cur_th->ractor_waiting.wakeup_status = wakeup_none;

    RUBY_DEBUG_LOG("wakeup %s", wakeup_status_str(wakeup_status));

    ASSERT_ractor_locking(cr);
    return wakeup_status;
}

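// Sleep without a cleanup callback; see ractor_sleep_with_cleanup above.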
static enum rb_ractor_wakeup_status
ractor_sleep(rb_execution_context_t *ec, rb_ractor_t *cr, rb_thread_t *cur_th, enum rb_ractor_wait_status wait_status)
{
    return ractor_sleep_with_cleanup(ec, cr, cur_th, wait_status, 0, NULL);
}

// Ractor.receive

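// Ractor.receive / Ractor.receive_if cannot be re-entered from inside a
// receive_if block on the same thread; the per-thread receiving_mutex, which is
// held while the block runs, is used to detect that case.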
static void
ractor_recursive_receive_if(rb_thread_t *th)
{
    if (th->ractor_waiting.receiving_mutex && rb_mutex_owned_p(th->ractor_waiting.receiving_mutex)) {
        rb_raise(rb_eRactorError, "can not call receive/receive_if recursively");
    }
}

static VALUE
ractor_try_receive(rb_execution_context_t *ec, rb_ractor_t *cr, struct rb_ractor_queue *rq)
{
    struct rb_ractor_basket basket;
    ractor_recursive_receive_if(rb_ec_thread_ptr(ec));
    bool received = false;

    RACTOR_LOCK_SELF(cr);
    {
        RUBY_DEBUG_LOG("rq->cnt:%d", rq->cnt);
        received = ractor_queue_deq(cr, rq, &basket);
    }
    RACTOR_UNLOCK_SELF(cr);

    if (!received) {
        if (cr->sync.incoming_port_closed) {
            rb_raise(rb_eRactorClosedError, "The incoming port is already closed");
        }
        return Qundef;
    }
    else {
        return ractor_basket_accept(&basket);
    }
}

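// Block the calling thread (wait_receiving) until the receive queue is
// non-empty or the incoming port is closed. The caller retries the actual
// dequeue itself (see ractor_receive).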
static void
ractor_wait_receive(rb_execution_context_t *ec, rb_ractor_t *cr, struct rb_ractor_queue *rq)
{
    VM_ASSERT(cr == rb_ec_ractor_ptr(ec));
    rb_thread_t *cur_th = rb_ec_thread_ptr(ec);
    ractor_recursive_receive_if(cur_th);

    RACTOR_LOCK(cr);
    {
        while (ractor_queue_empty_p(cr, rq) && !cr->sync.incoming_port_closed) {
            ractor_sleep(ec, cr, cur_th, wait_receiving);
        }
    }
    RACTOR_UNLOCK(cr);
}

static VALUE
ractor_receive(rb_execution_context_t *ec, rb_ractor_t *cr)
{
    VM_ASSERT(cr == rb_ec_ractor_ptr(ec));
    VALUE v;
    struct rb_ractor_queue *rq = &cr->sync.recv_queue;

    while (UNDEF_P(v = ractor_try_receive(ec, cr, rq))) {
        ractor_wait_receive(ec, cr, rq);
    }

    return v;
}

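// Debugging helper (normally compiled out): dump the receive queue and abort
// if any basket is still marked as reserved.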
#if 0
static void
rq_dump(struct rb_ractor_queue *rq)
{
    bool bug = false;
    for (int i=0; i<rq->cnt; i++) {
        struct rb_ractor_basket *b = ractor_queue_at(NULL, rq, i);
        fprintf(stderr, "%d (start:%d) type:%s %p %s\n", i, rq->start, basket_type_name(b->type),
                (void *)b, RSTRING_PTR(RARRAY_AREF(b->v, 1)));
        if (basket_type_p(b, basket_type_reserved)) bug = true;
    }
    if (bug) rb_bug("!!");
}
#endif

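// Arguments threaded through rb_ensure for Ractor.receive_if: the candidate
// value, its position in the receive queue, and a success flag that tells the
// ensure handler whether the reservation still needs to be rolled back.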
struct receive_block_data {
    rb_ractor_t *cr;
    rb_thread_t *th;
    struct rb_ractor_queue *rq;
    VALUE v;
    int index;
    bool success;
};

static void
ractor_receive_if_lock(rb_thread_t *th)
{
    VALUE m = th->ractor_waiting.receiving_mutex;
    if (m == Qfalse) {
        m = th->ractor_waiting.receiving_mutex = rb_mutex_new();
    }
    rb_mutex_lock(m);
}

static VALUE
receive_if_body(VALUE ptr)
{
    struct receive_block_data *data = (struct receive_block_data *)ptr;

    ractor_receive_if_lock(data->th);
    VALUE block_result = rb_yield(data->v);
    rb_ractor_t *cr = data->cr;

    RACTOR_LOCK_SELF(cr);
    {
        struct rb_ractor_basket *b = ractor_queue_at(cr, data->rq, data->index);
        VM_ASSERT(basket_type_p(b, basket_type_reserved));
        data->rq->reserved_cnt--;

        if (RTEST(block_result)) {
            ractor_queue_delete(cr, data->rq, b);
            ractor_queue_compact(cr, data->rq);
        }
        else {
            b->type.e = basket_type_ref;
        }
    }
    RACTOR_UNLOCK_SELF(cr);

    data->success = true;

    if (RTEST(block_result)) {
        return data->v;
    }
    else {
        return Qundef;
    }
}

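// Ensure handler for receive_if: if the block did not finish normally, the
// reserved basket is marked deleted rather than being handed out again; the
// receiving mutex is released in every case.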
static VALUE
receive_if_ensure(VALUE v)
{
    struct receive_block_data *data = (struct receive_block_data *)v;
    rb_ractor_t *cr = data->cr;
    rb_thread_t *cur_th = data->th;

    if (!data->success) {
        RACTOR_LOCK_SELF(cr);
        {
            struct rb_ractor_basket *b = ractor_queue_at(cr, data->rq, data->index);
            VM_ASSERT(basket_type_p(b, basket_type_reserved));
            b->type.e = basket_type_deleted;
            data->rq->reserved_cnt--;
        }
        RACTOR_UNLOCK_SELF(cr);
    }

    rb_mutex_unlock(cur_th->ractor_waiting.receiving_mutex);
    return Qnil;
}

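// Ractor.receive_if { |msg| ... }: scan the receive queue for a candidate
// basket, mark it reserved so other receivers skip it, yield its value to the
// block outside the ractor lock, then either remove the basket (block returned
// a truthy value) or put it back as an ordinary (ref) entry. Non-matching
// messages therefore stay in the queue for later Ractor.receive calls. The scan
// restarts from the head whenever the queue's serial changes.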
static VALUE
ractor_receive_if(rb_execution_context_t *ec, VALUE crv, VALUE b)
{
    if (!RTEST(b)) rb_raise(rb_eArgError, "no block given");

    rb_ractor_t *cr = rb_ec_ractor_ptr(ec);
    rb_thread_t *cur_th = rb_ec_thread_ptr(ec);
    unsigned int serial = (unsigned int)-1;
    int index = 0;
    struct rb_ractor_queue *rq = &cr->sync.recv_queue;

    while (1) {
        VALUE v = Qundef;

        ractor_wait_receive(ec, cr, rq);

        RACTOR_LOCK_SELF(cr);
        {
            if (serial != rq->serial) {
                serial = rq->serial;
                index = 0;
            }

            // check newer version
            for (int i=index; i<rq->cnt; i++) {
                if (!ractor_queue_skip_p(cr, rq, i)) {
                    struct rb_ractor_basket *b = ractor_queue_at(cr, rq, i);
                    v = ractor_basket_value(b);
                    b->type.e = basket_type_reserved;
                    rq->reserved_cnt++;
                    index = i;
                    break;
                }
            }
        }
        RACTOR_UNLOCK_SELF(cr);

        if (!UNDEF_P(v)) {
            struct receive_block_data data = {
                .cr = cr,
                .th = cur_th,
                .rq = rq,
                .v = v,
                .index = index,
                .success = false,
            };

            VALUE result = rb_ensure(receive_if_body, (VALUE)&data,
                                     receive_if_ensure, (VALUE)&data);

            if (!UNDEF_P(result)) return result;
            index++;
        }

        RUBY_VM_CHECK_INTS(ec);
    }
}

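// Enqueue a filled basket on the receiving ractor's recv_queue and wake up any
// receiving thread in `r` blocked in Ractor.receive. Raises Ractor::ClosedError
// when the incoming port has already been closed.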
static void
ractor_send_basket(rb_execution_context_t *ec, rb_ractor_t *r, struct rb_ractor_basket *b)
{
    bool closed = false;

    RACTOR_LOCK(r);
    {
        if (r->sync.incoming_port_closed) {
            closed = true;
        }
        else {
            ractor_queue_enq(r, &r->sync.recv_queue, b);
            // wakeup any receiving thread in `r`
            ractor_wakeup(r, NULL, wait_receiving, wakeup_by_send);
        }
    }
    RACTOR_UNLOCK(r);

    if (closed) {
        rb_raise(rb_eRactorClosedError, "The incoming-port is already closed");
    }
}

// Ractor#send

static VALUE ractor_move(VALUE obj); // in this file
static VALUE ractor_copy(VALUE obj); // in this file

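// Decide how a value travels between ractors: shareable objects are passed by
// reference, everything else is either deep-copied (the default) or moved when
// `move: true` is given.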
static void
ractor_basket_prepare_contents(VALUE obj, VALUE move, volatile VALUE *pobj, enum rb_ractor_basket_type *ptype)
{
    VALUE v;
    enum rb_ractor_basket_type type;

    if (rb_ractor_shareable_p(obj)) {
        type = basket_type_ref;
        v = obj;
    }
    else if (!RTEST(move)) {
        v = ractor_copy(obj);
        type = basket_type_copy;
    }
    else {
        type = basket_type_move;
        v = ractor_move(obj);
    }

    *pobj = v;
    *ptype = type;
}

static void
ractor_basket_fill_(rb_ractor_t *cr, rb_thread_t *cur_th, struct rb_ractor_basket *basket, VALUE obj, bool exc)
{
    VM_ASSERT(cr == GET_RACTOR());

    basket->sender = cr->pub.self;
    basket->sending_th = cur_th;
    basket->p.send.exception = exc;
    basket->p.send.v = obj;
}

static void
ractor_basket_fill(rb_ractor_t *cr, rb_thread_t *cur_th, struct rb_ractor_basket *basket, VALUE obj, VALUE move, bool exc)
{
    VALUE v;
    enum rb_ractor_basket_type type;
    ractor_basket_prepare_contents(obj, move, &v, &type);
    ractor_basket_fill_(cr, cur_th, basket, v, exc);
    basket->type.e = type;
}

static void
ractor_basket_fill_will(rb_ractor_t *cr, rb_thread_t *cur_th, struct rb_ractor_basket *basket, VALUE obj, bool exc)
{
    ractor_basket_fill_(cr, cur_th, basket, obj, exc);
    basket->type.e = basket_type_will;
}

static VALUE
ractor_send(rb_execution_context_t *ec, rb_ractor_t *recv_r, VALUE obj, VALUE move)
{
    struct rb_ractor_basket basket;
    rb_ractor_t *cr = rb_ec_ractor_ptr(ec);
    rb_thread_t *cur_th = rb_ec_thread_ptr(ec);
    // TODO: Ractor local GC
    ractor_basket_fill(cr, cur_th, &basket, obj, move, false);
    ractor_send_basket(ec, recv_r, &basket);
    return recv_r->pub.self;
}

// Ractor#take

static bool
ractor_take_has_will(rb_ractor_t *r)
{
    ASSERT_ractor_locking(r);

    return basket_type_p(&r->sync.will_basket, basket_type_will);
}

static bool
ractor_take_will(rb_ractor_t *r, struct rb_ractor_basket *b)
{
    ASSERT_ractor_locking(r);

    if (ractor_take_has_will(r)) {
        *b = r->sync.will_basket;
        r->sync.will_basket.type.e = basket_type_none;
        return true;
    }
    else {
        VM_ASSERT(basket_type_p(&r->sync.will_basket, basket_type_none));
        return false;
    }
}

static bool
ractor_take_will_lock(rb_ractor_t *r, struct rb_ractor_basket *b)
{
    ASSERT_ractor_unlocking(r);
    bool taken;

    RACTOR_LOCK(r);
    {
        taken = ractor_take_will(r, b);
    }
    RACTOR_UNLOCK(r);

    return taken;
}

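// Register a take basket in r's takers_queue so that a future Ractor.yield in r
// can hand its value directly to this thread, and wake up any thread in r that
// is already waiting to yield. r's will and a closed outgoing port are handled
// up front.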
static bool
ractor_register_take(rb_ractor_t *cr, rb_thread_t *cur_th, rb_ractor_t *r, struct rb_ractor_basket *take_basket,
                     bool is_take, struct rb_ractor_selector_take_config *config, bool ignore_error)
{
    struct rb_ractor_basket b = {
        .type.e = basket_type_take_basket,
        .sender = cr->pub.self,
        .sending_th = cur_th,
        .p = {
            .take = {
                .basket = take_basket, // pointer to our stack value saved in ractor `r` queue
                .config = config,
            },
        },
    };
    bool closed = false;

    RACTOR_LOCK(r);
    {
        if (is_take && ractor_take_will(r, take_basket)) {
            RUBY_DEBUG_LOG("take over a will of r:%d", rb_ractor_id(r));
        }
        else if (!is_take && ractor_take_has_will(r)) {
            RUBY_DEBUG_LOG("has_will");
            VM_ASSERT(config != NULL);
            config->closed = true;
        }
        else if (r->sync.outgoing_port_closed) {
            closed = true;
        }
        else {
            RUBY_DEBUG_LOG("register in r:%d", rb_ractor_id(r));
            ractor_queue_enq(r, &r->sync.takers_queue, &b);

            if (basket_none_p(take_basket)) {
                // wakeup any thread in `r` that has yielded, if there is any.
                ractor_wakeup(r, NULL, wait_yielding, wakeup_by_take);
            }
        }
    }
    RACTOR_UNLOCK(r);

    if (closed) {
        if (!ignore_error) rb_raise(rb_eRactorClosedError, "The outgoing-port is already closed");
        return false;
    }
    else {
        return true;
    }
}

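// Remove this thread's pending take basket from r's takers_queue again, e.g.
// when a waiting take is interrupted before anybody yielded to it.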
static bool
ractor_deregister_take(rb_ractor_t *r, struct rb_ractor_basket *take_basket)
{
    struct rb_ractor_queue *ts = &r->sync.takers_queue;
    bool deleted = false;

    RACTOR_LOCK(r);
    {
        if (r->sync.outgoing_port_closed) {
            // ok
        }
        else {
            for (int i=0; i<ts->cnt; i++) {
                struct rb_ractor_basket *b = ractor_queue_at(r, ts, i);
                if (basket_type_p(b, basket_type_take_basket) && b->p.take.basket == take_basket) {
                    ractor_queue_delete(r, ts, b);
                    deleted = true;
                }
            }
            if (deleted) {
                ractor_queue_compact(r, ts);
            }
        }
    }
    RACTOR_UNLOCK(r);

    return deleted;
}

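// Check (under our own lock) whether the take basket registered with recv_r has
// been filled by a yielder. While it is still basket_type_none or
// basket_type_yielding the taker has to keep sleeping; any other type means a
// value (or a closed-port marker) is ready to be accepted.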
static VALUE
ractor_try_take(rb_ractor_t *cr, rb_thread_t *cur_th, rb_ractor_t *recv_r, struct rb_ractor_basket *take_basket)
{
    bool taken;

    RACTOR_LOCK_SELF(cr);
    {
        // If it hasn't yielded yet or is currently in the process of yielding, sleep more
        if (basket_none_p(take_basket) || basket_type_p(take_basket, basket_type_yielding)) {
            taken = false;
        }
        else {
            taken = true; // basket type might be, for ex, basket_type_copy if value was copied during yield
        }
    }
    RACTOR_UNLOCK_SELF(cr);

    if (taken) {
        RUBY_DEBUG_LOG("taken");
        if (basket_type_p(take_basket, basket_type_deleted)) {
            VM_ASSERT(recv_r->sync.outgoing_port_closed);
            rb_raise(rb_eRactorClosedError, "The outgoing-port is already closed");
        }
        return ractor_basket_accept(take_basket);
    }
    else {
        RUBY_DEBUG_LOG("not taken");
        return Qundef;
    }
}

#if VM_CHECK_MODE > 0
static bool
ractor_check_specific_take_basket_lock(rb_ractor_t *r, struct rb_ractor_basket *tb)
{
    bool ret = false;
    struct rb_ractor_queue *ts = &r->sync.takers_queue;

    RACTOR_LOCK(r);
    {
        for (int i=0; i<ts->cnt; i++) {
            struct rb_ractor_basket *b = ractor_queue_at(r, ts, i);
            if (basket_type_p(b, basket_type_take_basket) && b->p.take.basket == tb) {
                ret = true;
                break;
            }
        }
    }
    RACTOR_UNLOCK(r);

    return ret;
}
#endif

// cleanup function, cr is unlocked
static void
ractor_take_cleanup(rb_ractor_t *cr, rb_ractor_t *r, struct rb_ractor_basket *tb)
{
  retry:
    if (basket_none_p(tb)) { // not yielded yet
        if (!ractor_deregister_take(r, tb)) {
            // not in r's takers queue
            rb_thread_sleep(0);
            goto retry;
        }
    }
    else {
        VM_ASSERT(!ractor_check_specific_take_basket_lock(r, tb));
    }
}

struct take_wait_take_cleanup_data {
    rb_ractor_t *r;
    struct rb_ractor_basket *tb;
};

static void
ractor_wait_take_cleanup(rb_ractor_t *cr, void *ptr)
{
    struct take_wait_take_cleanup_data *data = (struct take_wait_take_cleanup_data *)ptr;
    ractor_take_cleanup(cr, data->r, data->tb);
}

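// Block until the registered take basket changes state. The cleanup callback
// removes the basket from r's takers_queue again if the sleep is interrupted
// before any yielder claimed it.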
static void
ractor_wait_take(rb_execution_context_t *ec, rb_ractor_t *cr, rb_thread_t *cur_th, rb_ractor_t *r, struct rb_ractor_basket *take_basket)
{
    struct take_wait_take_cleanup_data data = {
        .r = r,
        .tb = take_basket,
    };

    RACTOR_LOCK_SELF(cr);
    {
        if (basket_none_p(take_basket) || basket_type_p(take_basket, basket_type_yielding)) {
            ractor_sleep_with_cleanup(ec, cr, cur_th, wait_taking, ractor_wait_take_cleanup, &data);
        }
    }
    RACTOR_UNLOCK_SELF(cr);
}

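// Ractor#take: register a take basket with recv_r, then alternate between
// ractor_try_take() and ractor_wait_take() until a value (or recv_r's will)
// arrives or the outgoing port turns out to be closed.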
static VALUE
ractor_take(rb_execution_context_t *ec, rb_ractor_t *recv_r)
{
    RUBY_DEBUG_LOG("from r:%u", rb_ractor_id(recv_r));
    VALUE v;
    rb_ractor_t *cr = rb_ec_ractor_ptr(ec);
    rb_thread_t *cur_th = rb_ec_thread_ptr(ec);

    struct rb_ractor_basket take_basket = {
        .type.e = basket_type_none,
        .sender = 0,
    };

    ractor_register_take(cr, cur_th, recv_r, &take_basket, true, NULL, false);

    while (UNDEF_P(v = ractor_try_take(cr, cur_th, recv_r, &take_basket))) {
        ractor_wait_take(ec, cr, cur_th, recv_r, &take_basket);
    }

    VM_ASSERT(!basket_none_p(&take_basket)); // might be, for ex, basket_type_copy
    VM_ASSERT(!ractor_check_specific_take_basket_lock(recv_r, &take_basket));

    return v;
}

// Ractor.yield

static bool
ractor_check_take_basket(rb_ractor_t *cr, struct rb_ractor_queue *rs)
{
    ASSERT_ractor_locking(cr);

    for (int i=0; i<rs->cnt; i++) {
        struct rb_ractor_basket *b = ractor_queue_at(cr, rs, i);
        if (basket_type_p(b, basket_type_take_basket) &&
            basket_none_p(b->p.take.basket)) {
            return true;
        }
    }

    return false;
}

// Find another ractor that is taking from this ractor, so we can yield to it
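// The taker's stack basket starts out as basket_type_none. A yielder claims it
// by CAS-ing it to basket_type_yielding and then stores the real payload type
// (e.g. basket_type_copy), which is what ractor_try_take() polls for.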
static bool
|
|
|
|
ractor_deq_take_basket(rb_ractor_t *cr, struct rb_ractor_queue *rs, struct rb_ractor_basket *b)
|
|
|
|
{
|
|
|
|
ASSERT_ractor_unlocking(cr);
|
|
|
|
struct rb_ractor_basket *first_tb = NULL;
|
|
|
|
bool found = false;
|
|
|
|
|
|
|
|
RACTOR_LOCK_SELF(cr);
|
|
|
|
{
|
|
|
|
while (ractor_queue_deq(cr, rs, b)) {
|
Get ractor message passing working with > 1 thread sending/receiving values in same ractor
Rework ractors so that any ractor action (Ractor.receive, Ractor#send, Ractor.yield, Ractor#take,
Ractor.select) will operate on the thread that called the action. It will put that thread to sleep if
it's a blocking function and it needs to put it to sleep, and the awakening action (Ractor.yield,
Ractor#send) will wake up the blocked thread.
Before this change every blocking ractor action was associated with the ractor struct and its fields.
If a ractor called Ractor.receive, its wait status was wait_receiving, and when another ractor calls
r.send on it, it will look for that status in the ractor struct fields and wake it up. The problem was that
what if 2 threads call blocking ractor actions in the same ractor. Imagine if 1 thread has called Ractor.receive
and another r.take. Then, when a different ractor calls r.send on it, it doesn't know which ruby thread is associated
to which ractor action, so what ruby thread should it schedule? This change moves some fields onto the ruby thread
itself so that ruby threads are the ones that have ractor blocking statuses, and threads are then specifically scheduled
when unblocked.
Fixes [#17624]
Fixes [#21037]
2025-05-12 18:03:22 -04:00
|
|
|
if (basket_type_p(b, basket_type_take_basket)) { // some other ractor is taking
|
2023-02-24 18:46:17 +09:00
|
|
|
struct rb_ractor_basket *tb = b->p.take.basket;
|
|
|
|
|
|
|
|
if (RUBY_ATOMIC_CAS(tb->type.atomic, basket_type_none, basket_type_yielding) == basket_type_none) {
|
Get ractor message passing working with > 1 thread sending/receiving values in same ractor
Rework ractors so that any ractor action (Ractor.receive, Ractor#send, Ractor.yield, Ractor#take,
Ractor.select) will operate on the thread that called the action. It will put that thread to sleep if
it's a blocking function and it needs to put it to sleep, and the awakening action (Ractor.yield,
Ractor#send) will wake up the blocked thread.
Before this change every blocking ractor action was associated with the ractor struct and its fields.
If a ractor called Ractor.receive, its wait status was wait_receiving, and when another ractor calls
r.send on it, it will look for that status in the ractor struct fields and wake it up. The problem was that
what if 2 threads call blocking ractor actions in the same ractor. Imagine if 1 thread has called Ractor.receive
and another r.take. Then, when a different ractor calls r.send on it, it doesn't know which ruby thread is associated
to which ractor action, so what ruby thread should it schedule? This change moves some fields onto the ruby thread
itself so that ruby threads are the ones that have ractor blocking statuses, and threads are then specifically scheduled
when unblocked.
Fixes [#17624]
Fixes [#21037]
2025-05-12 18:03:22 -04:00
|
|
|
found = true; // payload basket is now "yielding" type
|
2023-02-24 18:46:17 +09:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
else {
|
|
|
|
ractor_queue_enq(cr, rs, b);
|
|
|
|
if (first_tb == NULL) first_tb = tb;
|
|
|
|
struct rb_ractor_basket *head = ractor_queue_head(cr, rs);
|
|
|
|
VM_ASSERT(head != NULL);
|
|
|
|
if (basket_type_p(head, basket_type_take_basket) && head->p.take.basket == first_tb) {
|
|
|
|
break; // loop detected
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
else {
|
|
|
|
VM_ASSERT(basket_none_p(b));
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
if (found && b->p.take.config && !b->p.take.config->oneshot) {
|
|
|
|
ractor_queue_enq(cr, rs, b);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
RACTOR_UNLOCK_SELF(cr);
|
|
|
|
|
|
|
|
return found;
|
|
|
|
}
|
|
|
|
|
Get ractor message passing working with > 1 thread sending/receiving values in same ractor
Rework ractors so that any ractor action (Ractor.receive, Ractor#send, Ractor.yield, Ractor#take,
Ractor.select) will operate on the thread that called the action. It will put that thread to sleep if
it's a blocking function and it needs to put it to sleep, and the awakening action (Ractor.yield,
Ractor#send) will wake up the blocked thread.
Before this change every blocking ractor action was associated with the ractor struct and its fields.
If a ractor called Ractor.receive, its wait status was wait_receiving, and when another ractor calls
r.send on it, it will look for that status in the ractor struct fields and wake it up. The problem was that
what if 2 threads call blocking ractor actions in the same ractor. Imagine if 1 thread has called Ractor.receive
and another r.take. Then, when a different ractor calls r.send on it, it doesn't know which ruby thread is associated
to which ractor action, so what ruby thread should it schedule? This change moves some fields onto the ruby thread
itself so that ruby threads are the ones that have ractor blocking statuses, and threads are then specifically scheduled
when unblocked.
Fixes [#17624]
Fixes [#21037]
2025-05-12 18:03:22 -04:00
|
|
|
// Try yielding to a taking ractor
|
2023-02-24 18:46:17 +09:00
|
|
|
static bool
|
2023-03-19 21:57:22 +09:00
|
|
|
ractor_try_yield(rb_execution_context_t *ec, rb_ractor_t *cr, struct rb_ractor_queue *ts, volatile VALUE obj, VALUE move, bool exc, bool is_will)
|
2023-02-24 18:46:17 +09:00
|
|
|
{
|
2025-05-12 18:03:22 -04:00
|
|
|
// Don't lock the yielding ractor at the same time as the taking ractor. This could deadlock due to a timing
|
|
|
|
// issue because we don't have a lock hierarchy.
|
2023-02-24 18:46:17 +09:00
|
|
|
ASSERT_ractor_unlocking(cr);
|
2025-05-12 18:03:22 -04:00
|
|
|
rb_thread_t *cur_th = rb_ec_thread_ptr(ec);
|
2023-02-24 18:46:17 +09:00
|
|
|
|
|
|
|
struct rb_ractor_basket b;
|
|
|
|
|
2025-05-12 18:03:22 -04:00
|
|
|
if (ractor_deq_take_basket(cr, ts, &b)) { // deq a take basket from takers queue of `cr` into `b`
|
2023-02-24 18:46:17 +09:00
|
|
|
VM_ASSERT(basket_type_p(&b, basket_type_take_basket));
|
|
|
|
VM_ASSERT(basket_type_p(b.p.take.basket, basket_type_yielding));
|
|
|
|
|
2025-05-12 18:03:22 -04:00
|
|
|
rb_ractor_t *tr = RACTOR_PTR(b.sender); // taking ractor
|
|
|
|
rb_thread_t *tr_th = b.sending_th; // taking thread
|
|
|
|
struct rb_ractor_basket *tb = b.p.take.basket; // payload basket
|
2023-02-24 18:46:17 +09:00
|
|
|
enum rb_ractor_basket_type type;
|
|
|
|
|
|
|
|
RUBY_DEBUG_LOG("basket from r:%u", rb_ractor_id(tr));
|
|
|
|
|
|
|
|
if (is_will) {
|
2025-05-12 18:03:22 -04:00
|
|
|
type = basket_type_will; // last message
|
2023-02-24 18:46:17 +09:00
|
|
|
}
|
|
|
|
else {
|
2023-08-08 00:32:45 +09:00
|
|
|
enum ruby_tag_type state;
|
2023-02-24 18:46:17 +09:00
|
|
|
|
|
|
|
// begin
|
|
|
|
EC_PUSH_TAG(ec);
|
|
|
|
if ((state = EC_EXEC_TAG()) == TAG_NONE) {
|
|
|
|
// TODO: Ractor local GC
|
|
|
|
ractor_basket_prepare_contents(obj, move, &obj, &type);
|
|
|
|
}
|
|
|
|
EC_POP_TAG();
|
2025-05-12 18:03:22 -04:00
|
|
|
// rescue ractor copy/move error, then re-raise
|
2023-02-24 18:46:17 +09:00
|
|
|
if (state) {
|
|
|
|
RACTOR_LOCK_SELF(cr);
|
|
|
|
{
|
|
|
|
b.p.take.basket->type.e = basket_type_none;
|
|
|
|
ractor_queue_enq(cr, ts, &b);
|
|
|
|
}
|
|
|
|
RACTOR_UNLOCK_SELF(cr);
|
|
|
|
EC_JUMP_TAG(ec, state);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
RACTOR_LOCK(tr);
|
|
|
|
{
|
|
|
|
VM_ASSERT(basket_type_p(tb, basket_type_yielding));
|
|
|
|
// fill atomic
|
|
|
|
RUBY_DEBUG_LOG("fill %sbasket from r:%u", is_will ? "will " : "", rb_ractor_id(tr));
|
2025-05-12 18:03:22 -04:00
|
|
|
ractor_basket_fill_(cr, cur_th, tb, obj, exc); // fill the take basket payload
|
2023-02-24 18:46:17 +09:00
|
|
|
if (RUBY_ATOMIC_CAS(tb->type.atomic, basket_type_yielding, type) != basket_type_yielding) {
|
|
|
|
rb_bug("unreachable");
|
|
|
|
}
|
2025-05-12 18:03:22 -04:00
|
|
|
ractor_wakeup(tr, tr_th, wait_taking, wakeup_by_yield);
|
2023-02-24 18:46:17 +09:00
|
|
|
}
|
|
|
|
RACTOR_UNLOCK(tr);
|
|
|
|
|
|
|
|
return true;
|
|
|
|
}
|
2024-11-05 11:49:21 +09:00
|
|
|
else if (cr->sync.outgoing_port_closed) {
|
|
|
|
rb_raise(rb_eRactorClosedError, "The outgoing-port is already closed");
|
|
|
|
}
|
2023-02-24 18:46:17 +09:00
|
|
|
else {
|
|
|
|
RUBY_DEBUG_LOG("no take basket");
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
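// Put the calling thread to sleep (wait_yielding) until a take basket appears on cr's takers queue
// or cr's outgoing port is closed.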
static void
|
|
|
|
ractor_wait_yield(rb_execution_context_t *ec, rb_ractor_t *cr, struct rb_ractor_queue *ts)
|
|
|
|
{
|
2025-05-12 18:03:22 -04:00
|
|
|
rb_thread_t *cur_th = rb_ec_thread_ptr(ec);
|
2023-02-24 18:46:17 +09:00
|
|
|
RACTOR_LOCK_SELF(cr);
|
|
|
|
{
|
2024-11-05 11:49:21 +09:00
|
|
|
while (!ractor_check_take_basket(cr, ts) && !cr->sync.outgoing_port_closed) {
|
2025-05-12 18:03:22 -04:00
|
|
|
ractor_sleep(ec, cr, cur_th, wait_yielding);
|
2023-02-24 18:46:17 +09:00
|
|
|
}
|
|
|
|
}
|
|
|
|
RACTOR_UNLOCK_SELF(cr);
|
|
|
|
}
|
|
|
|
|
2025-05-12 18:03:22 -04:00
|
|
|
// In order to yield, wait until our takers queue has at least one element, then wake up a taker.
|
2023-02-24 18:46:17 +09:00
|
|
|
static VALUE
|
|
|
|
ractor_yield(rb_execution_context_t *ec, rb_ractor_t *cr, VALUE obj, VALUE move)
|
|
|
|
{
|
|
|
|
struct rb_ractor_queue *ts = &cr->sync.takers_queue;
|
|
|
|
|
|
|
|
while (!ractor_try_yield(ec, cr, ts, obj, move, false, false)) {
|
|
|
|
ractor_wait_yield(ec, cr, ts);
|
|
|
|
}
|
|
|
|
|
|
|
|
return Qnil;
|
|
|
|
}
|
|
|
|
|
|
|
|
// Ractor::Selector
|
2022-03-28 17:00:45 -04:00
|
|
|
|
2023-02-24 18:46:17 +09:00
|
|
|
struct rb_ractor_selector {
|
|
|
|
rb_ractor_t *r;
|
|
|
|
struct rb_ractor_basket take_basket;
|
|
|
|
st_table *take_ractors; // rb_ractor_t * => (struct rb_ractor_selector_take_config *)
|
|
|
|
};
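// Rough Ruby-level usage (informal sketch; Ractor::Selector is experimental and its exact API may
// differ between versions):
//
//   s = Ractor::Selector.new(r1, r2)  # registers each ractor into take_ractors
//   r, v = s.wait                     # blocks until one of them yields a value or terminates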
|
|
|
|
|
|
|
|
static int
|
|
|
|
ractor_selector_mark_ractors_i(st_data_t key, st_data_t value, st_data_t data)
|
|
|
|
{
|
|
|
|
const rb_ractor_t *r = (rb_ractor_t *)key;
|
|
|
|
rb_gc_mark(r->pub.self);
|
|
|
|
return ST_CONTINUE;
|
2020-03-10 02:22:11 +09:00
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
2023-02-24 18:46:17 +09:00
|
|
|
ractor_selector_mark(void *ptr)
|
2020-03-10 02:22:11 +09:00
|
|
|
{
|
2023-02-24 18:46:17 +09:00
|
|
|
struct rb_ractor_selector *s = ptr;
|
2020-03-10 02:22:11 +09:00
|
|
|
|
2023-02-24 18:46:17 +09:00
|
|
|
if (s->take_ractors) {
|
|
|
|
st_foreach(s->take_ractors, ractor_selector_mark_ractors_i, 0);
|
2020-03-10 02:22:11 +09:00
|
|
|
}
|
|
|
|
|
2023-02-24 18:46:17 +09:00
|
|
|
switch (s->take_basket.type.e) {
|
|
|
|
case basket_type_ref:
|
|
|
|
case basket_type_copy:
|
|
|
|
case basket_type_move:
|
|
|
|
case basket_type_will:
|
|
|
|
rb_gc_mark(s->take_basket.sender);
|
|
|
|
rb_gc_mark(s->take_basket.p.send.v);
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
break;
|
2020-03-10 02:22:11 +09:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2023-02-24 18:46:17 +09:00
|
|
|
static int
|
|
|
|
ractor_selector_release_i(st_data_t key, st_data_t val, st_data_t data)
|
|
|
|
{
|
|
|
|
struct rb_ractor_selector *s = (struct rb_ractor_selector *)data;
|
|
|
|
struct rb_ractor_selector_take_config *config = (struct rb_ractor_selector_take_config *)val;
|
|
|
|
|
|
|
|
if (!config->closed) {
|
|
|
|
ractor_deregister_take((rb_ractor_t *)key, &s->take_basket);
|
|
|
|
}
|
|
|
|
free(config);
|
|
|
|
return ST_CONTINUE;
|
|
|
|
}
|
2020-11-01 09:56:40 +09:00
|
|
|
|
2020-03-10 02:22:11 +09:00
|
|
|
static void
|
2023-02-24 18:46:17 +09:00
|
|
|
ractor_selector_free(void *ptr)
|
2020-03-10 02:22:11 +09:00
|
|
|
{
|
2023-02-24 18:46:17 +09:00
|
|
|
struct rb_ractor_selector *s = ptr;
|
|
|
|
st_foreach(s->take_ractors, ractor_selector_release_i, (st_data_t)s);
|
|
|
|
st_free_table(s->take_ractors);
|
|
|
|
ruby_xfree(ptr);
|
|
|
|
}
|
2020-03-10 02:22:11 +09:00
|
|
|
|
2023-02-24 18:46:17 +09:00
|
|
|
static size_t
|
|
|
|
ractor_selector_memsize(const void *ptr)
|
|
|
|
{
|
|
|
|
const struct rb_ractor_selector *s = ptr;
|
|
|
|
return sizeof(struct rb_ractor_selector) +
|
|
|
|
st_memsize(s->take_ractors) +
|
|
|
|
s->take_ractors->num_entries * sizeof(struct rb_ractor_selector_take_config);
|
|
|
|
}
|
2021-01-22 04:38:50 +09:00
|
|
|
|
2023-02-24 18:46:17 +09:00
|
|
|
static const rb_data_type_t ractor_selector_data_type = {
|
|
|
|
"ractor/selector",
|
|
|
|
{
|
|
|
|
ractor_selector_mark,
|
|
|
|
ractor_selector_free,
|
|
|
|
ractor_selector_memsize,
|
|
|
|
NULL, // update
|
|
|
|
},
|
|
|
|
0, 0, RUBY_TYPED_FREE_IMMEDIATELY,
|
|
|
|
};
|
|
|
|
|
|
|
|
static struct rb_ractor_selector *
|
|
|
|
RACTOR_SELECTOR_PTR(VALUE selv)
|
|
|
|
{
|
|
|
|
VM_ASSERT(rb_typeddata_is_kind_of(selv, &ractor_selector_data_type));
|
|
|
|
|
|
|
|
return (struct rb_ractor_selector *)DATA_PTR(selv);
|
2020-03-10 02:22:11 +09:00
|
|
|
}
|
|
|
|
|
2023-02-24 18:46:17 +09:00
|
|
|
// Ractor::Selector.new
|
|
|
|
|
2020-03-10 02:22:11 +09:00
|
|
|
static VALUE
|
2023-12-15 18:25:12 +09:00
|
|
|
ractor_selector_create(VALUE klass)
|
2020-03-10 02:22:11 +09:00
|
|
|
{
|
2023-02-24 18:46:17 +09:00
|
|
|
struct rb_ractor_selector *s;
|
2023-12-15 18:25:12 +09:00
|
|
|
VALUE selv = TypedData_Make_Struct(klass, struct rb_ractor_selector, &ractor_selector_data_type, s);
|
2023-02-24 18:46:17 +09:00
|
|
|
s->take_basket.type.e = basket_type_reserved;
|
|
|
|
s->take_ractors = st_init_numtable(); // ractor (ptr) -> take_config
|
|
|
|
return selv;
|
2020-03-10 02:22:11 +09:00
|
|
|
}
|
|
|
|
|
2023-02-24 18:46:17 +09:00
|
|
|
// Ractor::Selector#add(r)
|
|
|
|
|
2024-12-25 11:13:07 +09:00
|
|
|
/*
|
|
|
|
* call-seq:
|
|
|
|
* add(ractor) -> ractor
|
|
|
|
*
|
|
|
|
* Adds _ractor_ to +self+. Raises an exception if _ractor_ is already added.
|
|
|
|
* Returns _ractor_.
|
|
|
|
*/
|
2020-03-10 02:22:11 +09:00
|
|
|
static VALUE
|
2023-12-15 18:25:12 +09:00
|
|
|
ractor_selector_add(VALUE selv, VALUE rv)
|
2020-03-10 02:22:11 +09:00
|
|
|
{
|
2023-02-24 18:46:17 +09:00
|
|
|
if (!rb_ractor_p(rv)) {
|
|
|
|
rb_raise(rb_eArgError, "Not a ractor object");
|
|
|
|
}
|
2020-03-10 02:22:11 +09:00
|
|
|
|
2023-02-24 18:46:17 +09:00
|
|
|
rb_ractor_t *r = RACTOR_PTR(rv);
|
|
|
|
struct rb_ractor_selector *s = RACTOR_SELECTOR_PTR(selv);
|
2021-01-22 04:38:50 +09:00
|
|
|
|
2023-02-24 18:46:17 +09:00
|
|
|
if (st_lookup(s->take_ractors, (st_data_t)r, NULL)) {
|
|
|
|
rb_raise(rb_eArgError, "already added");
|
2020-03-10 02:22:11 +09:00
|
|
|
}
|
|
|
|
|
2023-02-24 18:46:17 +09:00
|
|
|
struct rb_ractor_selector_take_config *config = malloc(sizeof(struct rb_ractor_selector_take_config));
|
|
|
|
VM_ASSERT(config != NULL);
|
|
|
|
config->closed = false;
|
|
|
|
config->oneshot = false;
|
|
|
|
|
2025-05-12 18:03:22 -04:00
|
|
|
if (ractor_register_take(GET_RACTOR(), GET_THREAD(), r, &s->take_basket, false, config, true)) {
|
2023-02-24 18:46:17 +09:00
|
|
|
st_insert(s->take_ractors, (st_data_t)r, (st_data_t)config);
|
2020-03-10 02:22:11 +09:00
|
|
|
}
|
2023-02-24 18:46:17 +09:00
|
|
|
|
|
|
|
return rv;
|
2020-03-10 02:22:11 +09:00
|
|
|
}
|
|
|
|
|
2023-02-24 18:46:17 +09:00
|
|
|
// Ractor::Selector#remove(r)
|
|
|
|
|
2024-12-25 11:13:07 +09:00
|
|
|
/* call-seq:
|
|
|
|
* remove(ractor) -> ractor
|
|
|
|
*
|
|
|
|
* Removes _ractor_ from +self+. Raises an exception if _ractor_ is not added.
|
|
|
|
* Returns the removed _ractor_.
|
|
|
|
*/
|
2021-01-22 04:38:50 +09:00
|
|
|
static VALUE
|
2023-12-15 18:25:12 +09:00
|
|
|
ractor_selector_remove(VALUE selv, VALUE rv)
|
2021-01-22 04:38:50 +09:00
|
|
|
{
|
2023-02-24 18:46:17 +09:00
|
|
|
if (!rb_ractor_p(rv)) {
|
|
|
|
rb_raise(rb_eArgError, "Not a ractor object");
|
|
|
|
}
|
|
|
|
|
|
|
|
rb_ractor_t *r = RACTOR_PTR(rv);
|
|
|
|
struct rb_ractor_selector *s = RACTOR_SELECTOR_PTR(selv);
|
|
|
|
|
|
|
|
RUBY_DEBUG_LOG("r:%u", rb_ractor_id(r));
|
|
|
|
|
|
|
|
if (!st_lookup(s->take_ractors, (st_data_t)r, NULL)) {
|
|
|
|
rb_raise(rb_eArgError, "not added yet");
|
|
|
|
}
|
|
|
|
|
|
|
|
ractor_deregister_take(r, &s->take_basket);
|
|
|
|
struct rb_ractor_selector_take_config *config;
|
|
|
|
st_delete(s->take_ractors, (st_data_t *)&r, (st_data_t *)&config);
|
|
|
|
free(config);
|
|
|
|
|
|
|
|
return rv;
|
2021-01-22 04:38:50 +09:00
|
|
|
}
|
|
|
|
|
2023-02-24 18:46:17 +09:00
|
|
|
// Ractor::Selector#clear
|
|
|
|
|
|
|
|
struct ractor_selector_clear_data {
|
|
|
|
VALUE selv;
|
|
|
|
rb_execution_context_t *ec;
|
|
|
|
};
|
|
|
|
|
|
|
|
static int
|
|
|
|
ractor_selector_clear_i(st_data_t key, st_data_t val, st_data_t data)
|
2020-03-10 02:22:11 +09:00
|
|
|
{
|
2023-12-15 18:25:12 +09:00
|
|
|
VALUE selv = (VALUE)data;
|
2023-02-24 18:46:17 +09:00
|
|
|
rb_ractor_t *r = (rb_ractor_t *)key;
|
2023-12-15 18:25:12 +09:00
|
|
|
ractor_selector_remove(selv, r->pub.self);
|
2023-02-24 18:46:17 +09:00
|
|
|
return ST_CONTINUE;
|
|
|
|
}
|
2020-03-10 02:22:11 +09:00
|
|
|
|
2024-12-25 11:13:07 +09:00
|
|
|
/*
|
|
|
|
* call-seq:
|
|
|
|
* clear -> self
|
|
|
|
*
|
|
|
|
* Removes all ractors from +self+. Returns +self+.
|
|
|
|
*/
|
2023-02-24 18:46:17 +09:00
|
|
|
static VALUE
|
2023-12-15 18:25:12 +09:00
|
|
|
ractor_selector_clear(VALUE selv)
|
2023-02-24 18:46:17 +09:00
|
|
|
{
|
|
|
|
struct rb_ractor_selector *s = RACTOR_SELECTOR_PTR(selv);
|
2020-09-25 11:39:15 +09:00
|
|
|
|
2023-12-15 18:25:12 +09:00
|
|
|
st_foreach(s->take_ractors, ractor_selector_clear_i, (st_data_t)selv);
|
2023-02-24 18:46:17 +09:00
|
|
|
st_clear(s->take_ractors);
|
|
|
|
return selv;
|
|
|
|
}
|
2020-03-10 02:22:11 +09:00
|
|
|
|
2024-12-25 11:13:07 +09:00
|
|
|
/*
|
|
|
|
* call-seq:
|
|
|
|
* empty? -> true or false
|
|
|
|
*
|
|
|
|
* Returns +true+ if no ractor is added.
|
|
|
|
*/
|
2023-03-02 18:27:44 +09:00
|
|
|
static VALUE
|
2023-12-15 18:25:12 +09:00
|
|
|
ractor_selector_empty_p(VALUE selv)
|
2023-03-02 18:27:44 +09:00
|
|
|
{
|
|
|
|
struct rb_ractor_selector *s = RACTOR_SELECTOR_PTR(selv);
|
|
|
|
return s->take_ractors->num_entries == 0 ? Qtrue : Qfalse;
|
|
|
|
}
|
|
|
|
|
2023-02-24 18:46:17 +09:00
|
|
|
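// st_foreach callback for Ractor::Selector#wait: for each target ractor, grab its will if it has one,
// mark the shared take basket deleted if its outgoing port is closed, or wake its yielding threads so
// one of them can fill the basket.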
static int
|
|
|
|
ractor_selector_wait_i(st_data_t key, st_data_t val, st_data_t dat)
|
|
|
|
{
|
|
|
|
rb_ractor_t *r = (rb_ractor_t *)key;
|
|
|
|
struct rb_ractor_basket *tb = (struct rb_ractor_basket *)dat;
|
|
|
|
int ret;
|
2020-03-10 02:22:11 +09:00
|
|
|
|
2023-02-24 18:46:17 +09:00
|
|
|
if (!basket_none_p(tb)) {
|
2023-03-03 03:24:59 +09:00
|
|
|
RUBY_DEBUG_LOG("already taken:%s", basket_type_name(tb->type.e));
|
2023-02-24 18:46:17 +09:00
|
|
|
return ST_STOP;
|
|
|
|
}
|
2020-03-10 02:22:11 +09:00
|
|
|
|
2023-02-24 18:46:17 +09:00
|
|
|
RACTOR_LOCK(r);
|
|
|
|
{
|
|
|
|
if (basket_type_p(&r->sync.will_basket, basket_type_will)) {
|
|
|
|
RUBY_DEBUG_LOG("r:%u has will", rb_ractor_id(r));
|
2021-01-22 04:38:50 +09:00
|
|
|
|
2023-02-24 18:46:17 +09:00
|
|
|
if (RUBY_ATOMIC_CAS(tb->type.atomic, basket_type_none, basket_type_will) == basket_type_none) {
|
|
|
|
ractor_take_will(r, tb);
|
|
|
|
ret = ST_STOP;
|
2020-03-10 02:22:11 +09:00
|
|
|
}
|
|
|
|
else {
|
2023-03-03 03:24:59 +09:00
|
|
|
RUBY_DEBUG_LOG("has will, but already taken (%s)", basket_type_name(tb->type.e));
|
2023-02-24 18:46:17 +09:00
|
|
|
ret = ST_CONTINUE;
|
2020-03-10 02:22:11 +09:00
|
|
|
}
|
|
|
|
}
|
2023-02-24 18:46:17 +09:00
|
|
|
else if (r->sync.outgoing_port_closed) {
|
|
|
|
RUBY_DEBUG_LOG("r:%u is closed", rb_ractor_id(r));
|
2020-03-10 02:22:11 +09:00
|
|
|
|
2023-02-24 18:46:17 +09:00
|
|
|
if (RUBY_ATOMIC_CAS(tb->type.atomic, basket_type_none, basket_type_deleted) == basket_type_none) {
|
|
|
|
tb->sender = r->pub.self;
|
|
|
|
ret = ST_STOP;
|
|
|
|
}
|
|
|
|
else {
|
2023-03-03 03:24:59 +09:00
|
|
|
RUBY_DEBUG_LOG("closed, but already taken (%s)", basket_type_name(tb->type.e));
|
2023-02-24 18:46:17 +09:00
|
|
|
ret = ST_CONTINUE;
|
|
|
|
}
|
2020-03-10 02:22:11 +09:00
|
|
|
}
|
|
|
|
else {
|
2023-02-24 18:46:17 +09:00
|
|
|
RUBY_DEBUG_LOG("wakeup r:%u", rb_ractor_id(r));
|
2025-05-12 18:03:22 -04:00
|
|
|
ractor_wakeup(r, NULL, wait_yielding, wakeup_by_take);
|
2023-02-24 18:46:17 +09:00
|
|
|
ret = ST_CONTINUE;
|
2020-03-10 02:22:11 +09:00
|
|
|
}
|
|
|
|
}
|
2023-02-24 18:46:17 +09:00
|
|
|
RACTOR_UNLOCK(r);
|
|
|
|
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
// Ractor::Selector#wait
|
|
|
|
|
2025-05-12 18:03:22 -04:00
|
|
|
// cleanup function, cr is unlocked
|
2023-02-24 18:46:17 +09:00
|
|
|
static void
|
2025-05-12 18:03:22 -04:00
|
|
|
ractor_selector_wait_cleanup(rb_ractor_t *cr, void *ptr)
|
2023-02-24 18:46:17 +09:00
|
|
|
{
|
|
|
|
struct rb_ractor_basket *tb = (struct rb_ractor_basket *)ptr;
|
|
|
|
|
|
|
|
RACTOR_LOCK_SELF(cr);
|
|
|
|
{
|
2025-05-12 18:03:22 -04:00
|
|
|
while (basket_type_p(tb, basket_type_yielding)) {
|
|
|
|
RACTOR_UNLOCK_SELF(cr);
|
|
|
|
{
|
|
|
|
rb_thread_sleep(0);
|
|
|
|
}
|
|
|
|
RACTOR_LOCK_SELF(cr);
|
|
|
|
}
|
2023-02-24 18:46:17 +09:00
|
|
|
// if tb->type is not none, the take actually succeeded, but an interruption unfortunately ignores the result.
|
|
|
|
tb->type.e = basket_type_reserved;
|
2020-03-10 02:22:11 +09:00
|
|
|
}
|
2023-02-24 18:46:17 +09:00
|
|
|
RACTOR_UNLOCK_SELF(cr);
|
2020-03-10 02:22:11 +09:00
|
|
|
}
|
|
|
|
|
2024-12-25 11:13:07 +09:00
|
|
|
/* :nodoc: */
|
2020-03-10 02:22:11 +09:00
|
|
|
static VALUE
|
2023-12-15 18:25:12 +09:00
|
|
|
ractor_selector__wait(VALUE selv, VALUE do_receivev, VALUE do_yieldv, VALUE yield_value, VALUE move)
|
2020-03-10 02:22:11 +09:00
|
|
|
{
|
2023-12-15 18:25:12 +09:00
|
|
|
rb_execution_context_t *ec = GET_EC();
|
2023-02-24 18:46:17 +09:00
|
|
|
struct rb_ractor_selector *s = RACTOR_SELECTOR_PTR(selv);
|
|
|
|
struct rb_ractor_basket *tb = &s->take_basket;
|
|
|
|
struct rb_ractor_basket taken_basket;
|
2020-03-10 02:22:11 +09:00
|
|
|
rb_ractor_t *cr = rb_ec_ractor_ptr(ec);
|
2025-05-12 18:03:22 -04:00
|
|
|
rb_thread_t *cur_th = rb_ec_thread_ptr(ec);
|
2023-02-24 18:46:17 +09:00
|
|
|
bool do_receive = !!RTEST(do_receivev);
|
|
|
|
bool do_yield = !!RTEST(do_yieldv);
|
|
|
|
VALUE ret_v, ret_r;
|
|
|
|
enum rb_ractor_wait_status wait_status;
|
|
|
|
struct rb_ractor_queue *rq = &cr->sync.recv_queue;
|
|
|
|
struct rb_ractor_queue *ts = &cr->sync.takers_queue;
|
2020-03-10 02:22:11 +09:00
|
|
|
|
2023-02-24 18:46:17 +09:00
|
|
|
RUBY_DEBUG_LOG("start");
|
|
|
|
|
|
|
|
retry:
|
|
|
|
RUBY_DEBUG_LOG("takers:%ld", s->take_ractors->num_entries);
|
|
|
|
|
|
|
|
// setup wait_status
|
|
|
|
wait_status = wait_none;
|
|
|
|
if (s->take_ractors->num_entries > 0) wait_status |= wait_taking;
|
|
|
|
if (do_receive) wait_status |= wait_receiving;
|
|
|
|
if (do_yield) wait_status |= wait_yielding;
|
|
|
|
|
|
|
|
RUBY_DEBUG_LOG("wait:%s", wait_status_str(wait_status));
|
|
|
|
|
|
|
|
if (wait_status == wait_none) {
|
|
|
|
rb_raise(rb_eRactorError, "no taking ractors");
|
2020-03-10 02:22:11 +09:00
|
|
|
}
|
|
|
|
|
2023-02-24 18:46:17 +09:00
|
|
|
// check recv_queue
|
2024-01-30 14:48:59 +09:00
|
|
|
if (do_receive && !UNDEF_P(ret_v = ractor_try_receive(ec, cr, rq))) {
|
2023-02-24 18:46:17 +09:00
|
|
|
ret_r = ID2SYM(rb_intern("receive"));
|
|
|
|
goto success;
|
|
|
|
}
|
2021-01-22 04:38:50 +09:00
|
|
|
|
2023-02-24 18:46:17 +09:00
|
|
|
// check takers
|
|
|
|
if (do_yield && ractor_try_yield(ec, cr, ts, yield_value, move, false, false)) {
|
|
|
|
ret_v = Qnil;
|
|
|
|
ret_r = ID2SYM(rb_intern("yield"));
|
|
|
|
goto success;
|
2020-03-10 02:22:11 +09:00
|
|
|
}
|
|
|
|
|
2023-02-24 18:46:17 +09:00
|
|
|
// check take_basket
|
|
|
|
VM_ASSERT(basket_type_p(&s->take_basket, basket_type_reserved));
|
|
|
|
s->take_basket.type.e = basket_type_none;
|
|
|
|
// kick all take target ractors
|
|
|
|
st_foreach(s->take_ractors, ractor_selector_wait_i, (st_data_t)tb);
|
2020-03-10 02:22:11 +09:00
|
|
|
|
2023-02-24 18:46:17 +09:00
|
|
|
RACTOR_LOCK_SELF(cr);
|
|
|
|
{
|
|
|
|
retry_waiting:
|
|
|
|
while (1) {
|
|
|
|
if (!basket_none_p(tb)) {
|
2023-03-03 03:24:59 +09:00
|
|
|
RUBY_DEBUG_LOG("taken:%s from r:%u", basket_type_name(tb->type.e),
|
|
|
|
tb->sender ? rb_ractor_id(RACTOR_PTR(tb->sender)) : 0);
|
2020-03-10 02:22:11 +09:00
|
|
|
break;
|
|
|
|
}
|
2023-02-24 18:46:17 +09:00
|
|
|
if (do_receive && !ractor_queue_empty_p(cr, rq)) {
|
|
|
|
RUBY_DEBUG_LOG("can receive (%d)", rq->cnt);
|
2020-03-10 02:22:11 +09:00
|
|
|
break;
|
2023-02-24 18:46:17 +09:00
|
|
|
}
|
|
|
|
if (do_yield && ractor_check_take_basket(cr, ts)) {
|
|
|
|
RUBY_DEBUG_LOG("can yield");
|
2020-03-10 02:22:11 +09:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
2025-05-12 18:03:22 -04:00
|
|
|
ractor_sleep_with_cleanup(ec, cr, cur_th, wait_status, ractor_selector_wait_cleanup, tb);
|
2020-03-10 02:22:11 +09:00
|
|
|
}
|
|
|
|
|
2023-02-24 18:46:17 +09:00
|
|
|
taken_basket = *tb;
|
2020-03-10 02:22:11 +09:00
|
|
|
|
2023-02-24 18:46:17 +09:00
|
|
|
// ensure
|
|
|
|
// tb->type.e = basket_type_reserved # done atomically in the following code
|
|
|
|
if (taken_basket.type.e == basket_type_yielding ||
|
|
|
|
RUBY_ATOMIC_CAS(tb->type.atomic, taken_basket.type.e, basket_type_reserved) != taken_basket.type.e) {
|
2020-03-10 02:22:11 +09:00
|
|
|
|
2023-02-24 18:46:17 +09:00
|
|
|
if (basket_type_p(tb, basket_type_yielding)) {
|
|
|
|
RACTOR_UNLOCK_SELF(cr);
|
|
|
|
{
|
|
|
|
rb_thread_sleep(0);
|
|
|
|
}
|
|
|
|
RACTOR_LOCK_SELF(cr);
|
|
|
|
}
|
|
|
|
goto retry_waiting;
|
2020-03-10 02:22:11 +09:00
|
|
|
}
|
|
|
|
}
|
2023-02-24 18:46:17 +09:00
|
|
|
RACTOR_UNLOCK_SELF(cr);
|
2020-03-10 02:22:11 +09:00
|
|
|
|
2024-12-22 18:08:39 +09:00
|
|
|
// check the taken result
|
2023-02-24 18:46:17 +09:00
|
|
|
switch (taken_basket.type.e) {
|
|
|
|
case basket_type_none:
|
|
|
|
VM_ASSERT(do_receive || do_yield);
|
|
|
|
goto retry;
|
|
|
|
case basket_type_yielding:
|
|
|
|
rb_bug("unreachable");
|
|
|
|
case basket_type_deleted: {
|
2023-12-15 18:25:12 +09:00
|
|
|
ractor_selector_remove(selv, taken_basket.sender);
|
2023-03-03 03:24:59 +09:00
|
|
|
|
2023-02-24 18:46:17 +09:00
|
|
|
rb_ractor_t *r = RACTOR_PTR(taken_basket.sender);
|
2023-03-03 03:24:59 +09:00
|
|
|
if (ractor_take_will_lock(r, &taken_basket)) {
|
|
|
|
RUBY_DEBUG_LOG("has_will");
|
|
|
|
}
|
|
|
|
else {
|
|
|
|
RUBY_DEBUG_LOG("no will");
|
2023-02-24 18:46:17 +09:00
|
|
|
// rb_raise(rb_eRactorClosedError, "The outgoing-port is already closed");
|
|
|
|
// remove and retry wait
|
|
|
|
goto retry;
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
}
|
2023-03-02 18:27:44 +09:00
|
|
|
case basket_type_will:
|
|
|
|
// no more messages
|
2023-12-15 18:25:12 +09:00
|
|
|
ractor_selector_remove(selv, taken_basket.sender);
|
2023-03-02 18:27:44 +09:00
|
|
|
break;
|
2023-02-24 18:46:17 +09:00
|
|
|
default:
|
|
|
|
break;
|
2020-09-14 10:30:22 +09:00
|
|
|
}
|
2020-03-10 02:22:11 +09:00
|
|
|
|
2023-03-03 03:24:59 +09:00
|
|
|
RUBY_DEBUG_LOG("taken_basket:%s", basket_type_name(taken_basket.type.e));
|
|
|
|
|
2023-02-24 18:46:17 +09:00
|
|
|
ret_v = ractor_basket_accept(&taken_basket);
|
|
|
|
ret_r = taken_basket.sender;
|
|
|
|
success:
|
|
|
|
return rb_ary_new_from_args(2, ret_r, ret_v);
|
2020-03-10 02:22:11 +09:00
|
|
|
}
|
|
|
|
|
2024-12-25 11:13:07 +09:00
|
|
|
/*
|
|
|
|
* call-seq:
|
|
|
|
* wait(receive: false, yield_value: undef, move: false) -> [ractor, value]
|
|
|
|
*
|
|
|
|
* Waits until any ractor in _selector_ can be active.
|
|
|
|
*/
|
2023-12-15 18:25:12 +09:00
|
|
|
static VALUE
|
|
|
|
ractor_selector_wait(int argc, VALUE *argv, VALUE selector)
|
|
|
|
{
|
|
|
|
VALUE options;
|
|
|
|
ID keywords[3];
|
|
|
|
VALUE values[3];
|
|
|
|
|
|
|
|
keywords[0] = rb_intern("receive");
|
|
|
|
keywords[1] = rb_intern("yield_value");
|
|
|
|
keywords[2] = rb_intern("move");
|
|
|
|
|
|
|
|
rb_scan_args(argc, argv, "0:", &options);
|
|
|
|
rb_get_kwargs(options, keywords, 0, numberof(values), values);
|
|
|
|
return ractor_selector__wait(selector,
|
|
|
|
values[0] == Qundef ? Qfalse : RTEST(values[0]),
|
|
|
|
values[1] != Qundef, values[1], values[2]);
|
|
|
|
}
|
|
|
|
|
|
|
|
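// Build a selector of `klass` and add each of the `argc` ractors to it.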
static VALUE
|
|
|
|
ractor_selector_new(int argc, VALUE *ractors, VALUE klass)
|
|
|
|
{
|
|
|
|
VALUE selector = ractor_selector_create(klass);
|
|
|
|
|
|
|
|
for (int i=0; i<argc; i++) {
|
|
|
|
ractor_selector_add(selector, ractors[i]);
|
|
|
|
}
|
|
|
|
|
|
|
|
return selector;
|
|
|
|
}
|
|
|
|
|
|
|
|
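// Backend of Ractor.select: wrap the given ractors in a temporary selector, wait on it, and clear the
// selector before re-raising if the wait is interrupted by an exception.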
static VALUE
|
|
|
|
ractor_select_internal(rb_execution_context_t *ec, VALUE self, VALUE ractors, VALUE do_receive, VALUE do_yield, VALUE yield_value, VALUE move)
|
|
|
|
{
|
|
|
|
VALUE selector = ractor_selector_new(RARRAY_LENINT(ractors), (VALUE *)RARRAY_CONST_PTR(ractors), rb_cRactorSelector);
|
|
|
|
VALUE result;
|
|
|
|
int state;
|
|
|
|
|
|
|
|
EC_PUSH_TAG(ec);
|
2024-05-05 11:14:53 -04:00
|
|
|
if ((state = EC_EXEC_TAG()) == TAG_NONE) {
|
2023-12-15 18:25:12 +09:00
|
|
|
result = ractor_selector__wait(selector, do_receive, do_yield, yield_value, move);
|
|
|
|
}
|
2024-05-05 11:14:53 -04:00
|
|
|
EC_POP_TAG();
|
|
|
|
if (state != TAG_NONE) {
|
2023-12-15 18:25:12 +09:00
|
|
|
// ensure
|
|
|
|
ractor_selector_clear(selector);
|
|
|
|
|
|
|
|
// jump
|
|
|
|
EC_JUMP_TAG(ec, state);
|
|
|
|
}
|
|
|
|
|
|
|
|
RB_GC_GUARD(ractors);
|
|
|
|
return result;
|
|
|
|
}
|
|
|
|
|
2023-02-24 18:46:17 +09:00
|
|
|
// Ractor#close_incoming
|
2020-03-10 02:22:11 +09:00
|
|
|
|
|
|
|
static VALUE
|
|
|
|
ractor_close_incoming(rb_execution_context_t *ec, rb_ractor_t *r)
|
|
|
|
{
|
|
|
|
VALUE prev;
|
2025-05-12 18:03:22 -04:00
|
|
|
rb_thread_t *r_th = NULL;
|
|
|
|
if (r == rb_ec_ractor_ptr(ec)) {
|
|
|
|
r_th = rb_ec_thread_ptr(ec);
|
|
|
|
}
|
2020-03-10 02:22:11 +09:00
|
|
|
|
|
|
|
RACTOR_LOCK(r);
|
|
|
|
{
|
2020-12-08 00:42:20 +09:00
|
|
|
if (!r->sync.incoming_port_closed) {
|
2020-03-10 02:22:11 +09:00
|
|
|
prev = Qfalse;
|
2020-12-08 00:42:20 +09:00
|
|
|
r->sync.incoming_port_closed = true;
|
2025-05-12 18:03:22 -04:00
|
|
|
if (ractor_wakeup(r, r_th, wait_receiving, wakeup_by_close)) {
|
2023-02-24 18:46:17 +09:00
|
|
|
VM_ASSERT(ractor_queue_empty_p(r, &r->sync.recv_queue));
|
2021-09-28 18:00:03 +09:00
|
|
|
RUBY_DEBUG_LOG("cancel receiving");
|
2020-03-10 02:22:11 +09:00
|
|
|
}
|
|
|
|
}
|
|
|
|
else {
|
|
|
|
prev = Qtrue;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
RACTOR_UNLOCK(r);
|
|
|
|
return prev;
|
|
|
|
}
|
|
|
|
|
2023-02-24 18:46:17 +09:00
|
|
|
// Ractor#close_outgoing
|
|
|
|
|
2020-03-10 02:22:11 +09:00
|
|
|
static VALUE
|
2020-09-24 17:41:10 +09:00
|
|
|
ractor_close_outgoing(rb_execution_context_t *ec, rb_ractor_t *r)
|
2020-03-10 02:22:11 +09:00
|
|
|
{
|
|
|
|
VALUE prev;
|
|
|
|
|
2020-09-24 17:41:10 +09:00
|
|
|
RACTOR_LOCK(r);
|
2020-03-10 02:22:11 +09:00
|
|
|
{
|
2023-02-24 18:46:17 +09:00
|
|
|
struct rb_ractor_queue *ts = &r->sync.takers_queue;
|
|
|
|
rb_ractor_t *tr;
|
|
|
|
struct rb_ractor_basket b;
|
|
|
|
|
2020-12-08 00:42:20 +09:00
|
|
|
if (!r->sync.outgoing_port_closed) {
|
2020-03-10 02:22:11 +09:00
|
|
|
prev = Qfalse;
|
2020-12-08 00:42:20 +09:00
|
|
|
r->sync.outgoing_port_closed = true;
|
2020-03-10 02:22:11 +09:00
|
|
|
}
|
|
|
|
else {
|
2023-02-24 18:46:17 +09:00
|
|
|
VM_ASSERT(ractor_queue_empty_p(r, ts));
|
2020-03-10 02:22:11 +09:00
|
|
|
prev = Qtrue;
|
|
|
|
}
|
|
|
|
|
|
|
|
// wake up all taking ractors
|
2023-02-24 18:46:17 +09:00
|
|
|
while (ractor_queue_deq(r, ts, &b)) {
|
|
|
|
if (basket_type_p(&b, basket_type_take_basket)) {
|
|
|
|
tr = RACTOR_PTR(b.sender);
|
2025-05-12 18:03:22 -04:00
|
|
|
rb_thread_t *tr_th = b.sending_th;
|
2023-02-24 18:46:17 +09:00
|
|
|
struct rb_ractor_basket *tb = b.p.take.basket;
|
|
|
|
|
|
|
|
if (RUBY_ATOMIC_CAS(tb->type.atomic, basket_type_none, basket_type_yielding) == basket_type_none) {
|
|
|
|
b.p.take.basket->sender = r->pub.self;
|
|
|
|
if (RUBY_ATOMIC_CAS(tb->type.atomic, basket_type_yielding, basket_type_deleted) != basket_type_yielding) {
|
|
|
|
rb_bug("unreachable");
|
|
|
|
}
|
2023-03-03 03:24:59 +09:00
|
|
|
RUBY_DEBUG_LOG("set delete for r:%u", rb_ractor_id(RACTOR_PTR(b.sender)));
|
2023-02-24 18:46:17 +09:00
|
|
|
}
|
|
|
|
|
|
|
|
if (b.p.take.config) {
|
|
|
|
b.p.take.config->closed = true;
|
|
|
|
}
|
|
|
|
|
|
|
|
// TODO: deadlock-able?
|
|
|
|
RACTOR_LOCK(tr);
|
|
|
|
{
|
2025-05-12 18:03:22 -04:00
|
|
|
ractor_wakeup(tr, tr_th, wait_taking, wakeup_by_close);
|
2023-02-24 18:46:17 +09:00
|
|
|
}
|
|
|
|
RACTOR_UNLOCK(tr);
|
|
|
|
}
|
2020-03-10 02:22:11 +09:00
|
|
|
}
|
2020-09-24 17:41:10 +09:00
|
|
|
|
|
|
|
// wake up this ractor's threads blocked in Ractor.yield; the closed port makes them raise
|
2025-05-12 18:03:22 -04:00
|
|
|
ractor_wakeup(r, NULL, wait_yielding, wakeup_by_close);
|
2023-02-24 18:46:17 +09:00
|
|
|
|
|
|
|
VM_ASSERT(ractor_queue_empty_p(r, ts));
|
2020-03-10 02:22:11 +09:00
|
|
|
}
|
2020-09-24 17:41:10 +09:00
|
|
|
RACTOR_UNLOCK(r);
|
2020-03-10 02:22:11 +09:00
|
|
|
return prev;
|
|
|
|
}
|
|
|
|
|
|
|
|
// creation/termination
|
|
|
|
|
|
|
|
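// Issue the next ractor id with an atomic increment.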
static uint32_t
|
|
|
|
ractor_next_id(void)
|
|
|
|
{
|
|
|
|
uint32_t id;
|
|
|
|
|
2021-03-07 10:24:03 +09:00
|
|
|
id = (uint32_t)(RUBY_ATOMIC_FETCH_ADD(ractor_last_id, 1) + 1);
|
2020-03-10 02:22:11 +09:00
|
|
|
|
|
|
|
return id;
|
|
|
|
}
|
|
|
|
|
|
|
|
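// Link r into the VM's ractor list, bump the ractor count, and make sure it has a GC newobj cache.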
static void
|
2020-12-05 06:15:17 +09:00
|
|
|
vm_insert_ractor0(rb_vm_t *vm, rb_ractor_t *r, bool single_ractor_mode)
|
2020-03-10 02:22:11 +09:00
|
|
|
{
|
2020-12-20 01:44:41 +09:00
|
|
|
RUBY_DEBUG_LOG("r:%u ractor.cnt:%u++", r->pub.id, vm->ractor.cnt);
|
2020-12-05 06:15:17 +09:00
|
|
|
VM_ASSERT(single_ractor_mode || RB_VM_LOCKED_P());
|
2020-03-10 02:22:11 +09:00
|
|
|
|
2022-03-30 16:36:31 +09:00
|
|
|
ccan_list_add_tail(&vm->ractor.set, &r->vmlr_node);
|
2020-03-10 02:22:11 +09:00
|
|
|
vm->ractor.cnt++;
|
2024-05-03 12:00:24 -04:00
|
|
|
|
|
|
|
if (r->newobj_cache) {
|
|
|
|
VM_ASSERT(r == ruby_single_main_ractor);
|
|
|
|
}
|
|
|
|
else {
|
2024-11-22 13:30:00 +00:00
|
|
|
r->newobj_cache = rb_gc_ractor_cache_alloc(r);
|
2024-05-03 12:00:24 -04:00
|
|
|
}
|
2020-03-10 02:22:11 +09:00
|
|
|
}
|
|
|
|
|
2020-12-05 06:15:17 +09:00
|
|
|
static void
|
|
|
|
cancel_single_ractor_mode(void)
|
|
|
|
{
|
|
|
|
// enable multi-ractor mode
|
2021-09-28 18:00:03 +09:00
|
|
|
RUBY_DEBUG_LOG("enable multi-ractor mode");
|
2020-12-05 06:15:17 +09:00
|
|
|
|
2020-12-23 13:34:11 +09:00
|
|
|
ruby_single_main_ractor = NULL;
|
2024-11-05 04:54:06 +09:00
|
|
|
rb_funcall(rb_cRactor, rb_intern("_activated"), 0);
|
2020-12-05 06:15:17 +09:00
|
|
|
}
|
|
|
|
|
2020-03-10 02:22:11 +09:00
|
|
|
static void
|
|
|
|
vm_insert_ractor(rb_vm_t *vm, rb_ractor_t *r)
|
|
|
|
{
|
|
|
|
VM_ASSERT(ractor_status_p(r, ractor_created));
|
|
|
|
|
|
|
|
if (rb_multi_ractor_p()) {
|
|
|
|
RB_VM_LOCK();
|
|
|
|
{
|
2020-12-05 06:15:17 +09:00
|
|
|
vm_insert_ractor0(vm, r, false);
|
2020-03-10 02:22:11 +09:00
|
|
|
vm_ractor_blocking_cnt_inc(vm, r, __FILE__, __LINE__);
|
|
|
|
}
|
|
|
|
RB_VM_UNLOCK();
|
|
|
|
}
|
|
|
|
else {
|
2020-12-05 06:15:17 +09:00
|
|
|
if (vm->ractor.cnt == 0) {
|
2020-03-10 02:22:11 +09:00
|
|
|
// main ractor
|
2020-12-05 06:15:17 +09:00
|
|
|
vm_insert_ractor0(vm, r, true);
|
2020-03-10 02:22:11 +09:00
|
|
|
ractor_status_set(r, ractor_blocking);
|
|
|
|
ractor_status_set(r, ractor_running);
|
|
|
|
}
|
|
|
|
else {
|
2020-12-05 06:15:17 +09:00
|
|
|
cancel_single_ractor_mode();
|
|
|
|
vm_insert_ractor0(vm, r, true);
|
2020-03-10 02:22:11 +09:00
|
|
|
vm_ractor_blocking_cnt_inc(vm, r, __FILE__, __LINE__);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
|
|
|
vm_remove_ractor(rb_vm_t *vm, rb_ractor_t *cr)
|
|
|
|
{
|
|
|
|
VM_ASSERT(ractor_status_p(cr, ractor_running));
|
|
|
|
VM_ASSERT(vm->ractor.cnt > 1);
|
|
|
|
VM_ASSERT(cr->threads.cnt == 1);
|
|
|
|
|
|
|
|
RB_VM_LOCK();
|
|
|
|
{
|
|
|
|
RUBY_DEBUG_LOG("ractor.cnt:%u-- terminate_waiting:%d",
|
|
|
|
vm->ractor.cnt, vm->ractor.sync.terminate_waiting);
|
|
|
|
|
|
|
|
VM_ASSERT(vm->ractor.cnt > 0);
|
2022-03-30 16:36:31 +09:00
|
|
|
ccan_list_del(&cr->vmlr_node);
|
2020-03-10 02:22:11 +09:00
|
|
|
|
|
|
|
if (vm->ractor.cnt <= 2 && vm->ractor.sync.terminate_waiting) {
|
|
|
|
rb_native_cond_signal(&vm->ractor.sync.terminate_cond);
|
|
|
|
}
|
|
|
|
vm->ractor.cnt--;
|
|
|
|
|
2024-05-03 12:00:24 -04:00
|
|
|
rb_gc_ractor_cache_free(cr->newobj_cache);
|
|
|
|
cr->newobj_cache = NULL;
|
2021-06-29 14:32:50 -04:00
|
|
|
|
2020-03-10 02:22:11 +09:00
|
|
|
ractor_status_set(cr, ractor_terminated);
|
|
|
|
}
|
|
|
|
RB_VM_UNLOCK();
|
|
|
|
}
|
|
|
|
|
|
|
|
static VALUE
|
|
|
|
ractor_alloc(VALUE klass)
|
|
|
|
{
|
|
|
|
rb_ractor_t *r;
|
|
|
|
VALUE rv = TypedData_Make_Struct(klass, rb_ractor_t, &ractor_data_type, r);
|
|
|
|
FL_SET_RAW(rv, RUBY_FL_SHAREABLE);
|
2020-12-20 01:44:41 +09:00
|
|
|
r->pub.self = rv;
|
2020-03-10 02:22:11 +09:00
|
|
|
VM_ASSERT(ractor_status_p(r, ractor_created));
|
|
|
|
return rv;
|
|
|
|
}
|
|
|
|
|
|
|
|
rb_ractor_t *
|
|
|
|
rb_ractor_main_alloc(void)
|
|
|
|
{
|
2024-04-23 16:32:45 -04:00
|
|
|
rb_ractor_t *r = ruby_mimcalloc(1, sizeof(rb_ractor_t));
|
2020-03-10 02:22:11 +09:00
|
|
|
if (r == NULL) {
|
|
|
|
fprintf(stderr, "[FATAL] failed to allocate memory for main ractor\n");
|
|
|
|
exit(EXIT_FAILURE);
|
|
|
|
}
|
2020-12-20 01:44:41 +09:00
|
|
|
r->pub.id = ++ractor_last_id;
|
2020-03-10 02:22:11 +09:00
|
|
|
r->loc = Qnil;
|
|
|
|
r->name = Qnil;
|
2020-12-20 01:44:41 +09:00
|
|
|
r->pub.self = Qnil;
|
2024-11-22 13:30:00 +00:00
|
|
|
r->newobj_cache = rb_gc_ractor_cache_alloc(r);
|
2020-12-02 03:37:56 +09:00
|
|
|
ruby_single_main_ractor = r;
|
2020-03-10 02:22:11 +09:00
|
|
|
|
|
|
|
return r;
|
|
|
|
}
|
|
|
|
|
2020-12-24 04:29:59 +09:00
|
|
|
#if defined(HAVE_WORKING_FORK)
|
2025-03-25 15:26:55 -07:00
|
|
|
// Set up the main Ractor for the VM after fork.
|
|
|
|
// Puts us in "single Ractor mode"
|
2020-12-24 04:29:59 +09:00
|
|
|
void
|
|
|
|
rb_ractor_atfork(rb_vm_t *vm, rb_thread_t *th)
|
|
|
|
{
|
|
|
|
// initialize as a main ractor
|
|
|
|
vm->ractor.cnt = 0;
|
|
|
|
vm->ractor.blocking_cnt = 0;
|
|
|
|
ruby_single_main_ractor = th->ractor;
|
|
|
|
th->ractor->status_ = ractor_created;
|
|
|
|
|
|
|
|
rb_ractor_living_threads_init(th->ractor);
|
|
|
|
rb_ractor_living_threads_insert(th->ractor, th);
|
|
|
|
|
|
|
|
VM_ASSERT(vm->ractor.blocking_cnt == 0);
|
|
|
|
VM_ASSERT(vm->ractor.cnt == 1);
|
|
|
|
}
|
2025-03-25 15:26:55 -07:00
|
|
|
|
|
|
|
void
|
|
|
|
rb_ractor_terminate_atfork(rb_vm_t *vm, rb_ractor_t *r)
|
|
|
|
{
|
|
|
|
rb_gc_ractor_cache_free(r->newobj_cache);
|
|
|
|
r->newobj_cache = NULL;
|
|
|
|
r->status_ = ractor_terminated;
|
2025-03-25 15:55:33 -07:00
|
|
|
r->sync.outgoing_port_closed = true;
|
|
|
|
r->sync.incoming_port_closed = true;
|
|
|
|
r->sync.will_basket.type.e = basket_type_none;
|
2025-03-25 15:26:55 -07:00
|
|
|
}
|
2020-12-24 04:29:59 +09:00
|
|
|
#endif
|
|
|
|
|
2023-04-10 10:53:13 +09:00
|
|
|
void rb_thread_sched_init(struct rb_thread_sched *, bool atfork);
|
2020-03-10 02:22:11 +09:00
|
|
|
|
|
|
|
void
|
|
|
|
rb_ractor_living_threads_init(rb_ractor_t *r)
|
|
|
|
{
|
2022-03-30 16:36:31 +09:00
|
|
|
ccan_list_head_init(&r->threads.set);
|
2020-03-10 02:22:11 +09:00
|
|
|
r->threads.cnt = 0;
|
|
|
|
r->threads.blocking_cnt = 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
|
|
|
ractor_init(rb_ractor_t *r, VALUE name, VALUE loc)
|
|
|
|
{
|
2023-02-24 18:46:17 +09:00
|
|
|
ractor_queue_setup(&r->sync.recv_queue);
|
|
|
|
ractor_queue_setup(&r->sync.takers_queue);
|
2020-12-08 00:42:20 +09:00
|
|
|
rb_native_mutex_initialize(&r->sync.lock);
|
2023-04-10 10:53:13 +09:00
|
|
|
rb_native_cond_initialize(&r->barrier_wait_cond);
|
|
|
|
|
|
|
|
#ifdef RUBY_THREAD_WIN32_H
|
2020-03-10 02:22:11 +09:00
|
|
|
rb_native_cond_initialize(&r->barrier_wait_cond);
|
2023-04-10 10:53:13 +09:00
|
|
|
#endif
|
2025-05-12 18:03:22 -04:00
|
|
|
ccan_list_head_init(&r->sync.wait.waiting_threads);
|
2020-03-10 02:22:11 +09:00
|
|
|
|
|
|
|
// thread management
|
2023-04-10 10:53:13 +09:00
|
|
|
rb_thread_sched_init(&r->threads.sched, false);
|
2020-03-10 02:22:11 +09:00
|
|
|
rb_ractor_living_threads_init(r);
|
|
|
|
|
|
|
|
// naming
|
2020-09-18 13:02:14 +07:00
|
|
|
if (!NIL_P(name)) {
|
|
|
|
rb_encoding *enc;
|
|
|
|
StringValueCStr(name);
|
|
|
|
enc = rb_enc_get(name);
|
|
|
|
if (!rb_enc_asciicompat(enc)) {
|
|
|
|
rb_raise(rb_eArgError, "ASCII incompatible encoding (%s)",
|
|
|
|
rb_enc_name(enc));
|
|
|
|
}
|
|
|
|
name = rb_str_new_frozen(name);
|
|
|
|
}
|
2020-03-10 02:22:11 +09:00
|
|
|
r->name = name;
|
|
|
|
r->loc = loc;
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
rb_ractor_main_setup(rb_vm_t *vm, rb_ractor_t *r, rb_thread_t *th)
|
|
|
|
{
|
2020-12-20 01:44:41 +09:00
|
|
|
r->pub.self = TypedData_Wrap_Struct(rb_cRactor, &ractor_data_type, r);
|
|
|
|
FL_SET_RAW(r->pub.self, RUBY_FL_SHAREABLE);
|
2020-03-10 02:22:11 +09:00
|
|
|
ractor_init(r, Qnil, Qnil);
|
|
|
|
r->threads.main = th;
|
|
|
|
rb_ractor_living_threads_insert(r, th);
|
|
|
|
}
|
|
|
|
|
|
|
|
static VALUE
|
|
|
|
ractor_create(rb_execution_context_t *ec, VALUE self, VALUE loc, VALUE name, VALUE args, VALUE block)
|
|
|
|
{
|
|
|
|
VALUE rv = ractor_alloc(self);
|
|
|
|
rb_ractor_t *r = RACTOR_PTR(rv);
|
|
|
|
ractor_init(r, name, loc);
|
|
|
|
|
|
|
|
// can block here
|
2020-12-20 01:44:41 +09:00
|
|
|
r->pub.id = ractor_next_id();
|
|
|
|
RUBY_DEBUG_LOG("r:%u", r->pub.id);
|
2020-03-10 02:22:11 +09:00
|
|
|
|
Some global variables can be accessed from ractors
Some global variables should be usable from non-main Ractors.
[Bug #17268]
```ruby
# ractor-local (derived from created ractor): debug
'$DEBUG' => $DEBUG,
'$-d' => $-d,
# ractor-local (derived from created ractor): verbose
'$VERBOSE' => $VERBOSE,
'$-w' => $-w,
'$-W' => $-W,
'$-v' => $-v,
# process-local (readonly): other commandline parameters
'$-p' => $-p,
'$-l' => $-l,
'$-a' => $-a,
# process-local (readonly): getpid
'$$' => $$,
# thread local: process result
'$?' => $?,
# scope local: match
'$~' => $~.inspect,
'$&' => $&,
'$`' => $`,
'$\'' => $',
'$+' => $+,
'$1' => $1,
# scope local: last line
'$_' => $_,
# scope local: last backtrace
'$@' => $@,
'$!' => $!,
# ractor local: stdin, out, err
'$stdin' => $stdin.inspect,
'$stdout' => $stdout.inspect,
'$stderr' => $stderr.inspect,
```
2020-10-20 10:46:43 +09:00
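/*
 * The assignments just below copy the creator's settings, implementing the
 * "ractor-local (derived from created ractor)" rows above for $DEBUG and
 * $VERBOSE. A hedged Ruby illustration of the intended behavior:
 *
 *   $VERBOSE = true
 *   r = Ractor.new { $VERBOSE }  # child starts with the creator's value
 *   p r.take                     # => true; later changes stay ractor-local
 */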
|
|
|
rb_ractor_t *cr = rb_ec_ractor_ptr(ec);
|
|
|
|
r->verbose = cr->verbose;
|
|
|
|
r->debug = cr->debug;
|
|
|
|
|
2021-03-06 23:46:56 +00:00
|
|
|
rb_yjit_before_ractor_spawn();
|
2020-03-10 02:22:11 +09:00
|
|
|
rb_thread_create_ractor(r, args, block);
|
|
|
|
|
|
|
|
RB_GC_GUARD(rv);
|
|
|
|
return rv;
|
|
|
|
}
|
|
|
|
|
2024-11-05 04:54:06 +09:00
|
|
|
static VALUE
|
|
|
|
ractor_create_func(VALUE klass, VALUE loc, VALUE name, VALUE args, rb_block_call_func_t func)
|
|
|
|
{
|
|
|
|
VALUE block = rb_proc_new(func, Qnil);
|
|
|
|
return ractor_create(rb_current_ec_noinline(), klass, loc, name, args, block);
|
|
|
|
}
|
|
|
|
|
2020-03-10 02:22:11 +09:00
|
|
|
static void
|
2020-09-24 17:41:10 +09:00
|
|
|
ractor_yield_atexit(rb_execution_context_t *ec, rb_ractor_t *cr, VALUE v, bool exc)
|
2020-03-10 02:22:11 +09:00
|
|
|
{
|
2020-12-08 00:42:20 +09:00
|
|
|
if (cr->sync.outgoing_port_closed) {
|
2020-11-11 01:55:28 +09:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2020-03-10 02:22:11 +09:00
|
|
|
ASSERT_ractor_unlocking(cr);
|
|
|
|
|
2023-02-24 18:46:17 +09:00
|
|
|
struct rb_ractor_queue *ts = &cr->sync.takers_queue;
|
2025-05-12 18:03:22 -04:00
|
|
|
rb_thread_t *cur_th = rb_ec_thread_ptr(ec);
|
2020-09-19 17:40:31 +09:00
|
|
|
|
2020-03-10 02:22:11 +09:00
|
|
|
retry:
|
2023-02-24 18:46:17 +09:00
|
|
|
if (ractor_try_yield(ec, cr, ts, v, Qfalse, exc, true)) {
|
2020-03-10 02:22:11 +09:00
|
|
|
// OK.
|
|
|
|
}
|
|
|
|
else {
|
|
|
|
bool retry = false;
|
|
|
|
RACTOR_LOCK(cr);
|
|
|
|
{
|
2023-02-24 18:46:17 +09:00
|
|
|
if (!ractor_check_take_basket(cr, ts)) {
|
2025-05-12 18:03:22 -04:00
|
|
|
VM_ASSERT(cur_th->ractor_waiting.wait_status == wait_none);
|
2023-02-24 18:46:17 +09:00
|
|
|
RUBY_DEBUG_LOG("leave a will");
|
2025-05-12 18:03:22 -04:00
|
|
|
ractor_basket_fill_will(cr, cur_th, &cr->sync.will_basket, v, exc);
|
2020-03-10 02:22:11 +09:00
|
|
|
}
|
|
|
|
else {
|
2023-02-24 18:46:17 +09:00
|
|
|
RUBY_DEBUG_LOG("rare timing!");
|
2020-03-10 02:22:11 +09:00
|
|
|
retry = true; // another ractor is waiting for the yield.
|
|
|
|
}
|
|
|
|
}
|
|
|
|
RACTOR_UNLOCK(cr);
|
|
|
|
|
|
|
|
if (retry) goto retry;
|
|
|
|
}
|
|
|
|
}
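/*
 * What ractor_yield_atexit above provides at the Ruby level (hedged
 * illustration): if no taker is waiting when a ractor finishes, its final
 * value is stored as the ractor's "will" and delivered on a later take.
 *
 *   r = Ractor.new { :done }
 *   sleep 0.1   # r exits with no taker; :done is kept as its will
 *   p r.take    # => :done
 */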
|
|
|
|
|
2023-02-24 18:46:17 +09:00
|
|
|
void
|
|
|
|
rb_ractor_atexit(rb_execution_context_t *ec, VALUE result)
|
|
|
|
{
|
|
|
|
rb_ractor_t *cr = rb_ec_ractor_ptr(ec);
|
|
|
|
ractor_yield_atexit(ec, cr, result, false);
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
rb_ractor_atexit_exception(rb_execution_context_t *ec)
|
|
|
|
{
|
|
|
|
rb_ractor_t *cr = rb_ec_ractor_ptr(ec);
|
|
|
|
ractor_yield_atexit(ec, cr, ec->errinfo, true);
|
|
|
|
}
|
|
|
|
|
2020-03-10 02:22:11 +09:00
|
|
|
void
|
|
|
|
rb_ractor_teardown(rb_execution_context_t *ec)
|
|
|
|
{
|
|
|
|
rb_ractor_t *cr = rb_ec_ractor_ptr(ec);
|
|
|
|
ractor_close_incoming(ec, cr);
|
|
|
|
ractor_close_outgoing(ec, cr);
|
|
|
|
|
|
|
|
// sync with rb_ractor_terminate_interrupt_main_thread()
|
|
|
|
RB_VM_LOCK_ENTER();
|
|
|
|
{
|
|
|
|
VM_ASSERT(cr->threads.main != NULL);
|
|
|
|
cr->threads.main = NULL;
|
|
|
|
}
|
|
|
|
RB_VM_LOCK_LEAVE();
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
2020-10-03 14:05:15 +02:00
|
|
|
rb_ractor_receive_parameters(rb_execution_context_t *ec, rb_ractor_t *r, int len, VALUE *ptr)
|
2020-03-10 02:22:11 +09:00
|
|
|
{
|
|
|
|
for (int i=0; i<len; i++) {
|
2020-10-03 14:05:15 +02:00
|
|
|
ptr[i] = ractor_receive(ec, r);
|
2020-03-10 02:22:11 +09:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
rb_ractor_send_parameters(rb_execution_context_t *ec, rb_ractor_t *r, VALUE args)
|
|
|
|
{
|
|
|
|
int len = RARRAY_LENINT(args);
|
|
|
|
for (int i=0; i<len; i++) {
|
|
|
|
ractor_send(ec, r, RARRAY_AREF(args, i), false);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2023-03-06 21:34:31 -08:00
|
|
|
bool
|
2020-09-04 05:51:55 +09:00
|
|
|
rb_ractor_main_p_(void)
|
2020-03-10 02:22:11 +09:00
|
|
|
{
|
2020-09-04 05:51:55 +09:00
|
|
|
VM_ASSERT(rb_multi_ractor_p());
|
2020-03-10 02:22:11 +09:00
|
|
|
rb_execution_context_t *ec = GET_EC();
|
|
|
|
return rb_ec_ractor_ptr(ec) == rb_ec_vm_ptr(ec)->ractor.main_ractor;
|
|
|
|
}
|
|
|
|
|
2020-09-04 15:17:42 +09:00
|
|
|
bool
|
|
|
|
rb_obj_is_main_ractor(VALUE gv)
|
|
|
|
{
|
|
|
|
if (!rb_ractor_p(gv)) return false;
|
|
|
|
rb_ractor_t *r = DATA_PTR(gv);
|
|
|
|
return r == GET_VM()->ractor.main_ractor;
|
|
|
|
}
|
|
|
|
|
2020-03-10 02:22:11 +09:00
|
|
|
int
|
|
|
|
rb_ractor_living_thread_num(const rb_ractor_t *r)
|
|
|
|
{
|
|
|
|
return r->threads.cnt;
|
|
|
|
}
|
|
|
|
|
2023-03-30 02:38:08 +09:00
|
|
|
// only for current ractor
|
2020-03-10 02:22:11 +09:00
|
|
|
VALUE
|
2023-03-30 02:38:08 +09:00
|
|
|
rb_ractor_thread_list(void)
|
2020-03-10 02:22:11 +09:00
|
|
|
{
|
2023-03-30 02:38:08 +09:00
|
|
|
rb_ractor_t *r = GET_RACTOR();
|
2020-03-10 02:22:11 +09:00
|
|
|
rb_thread_t *th = 0;
|
2023-03-30 02:38:08 +09:00
|
|
|
VALUE ary = rb_ary_new();
|
2020-12-24 04:18:17 +09:00
|
|
|
|
2023-03-30 02:38:08 +09:00
|
|
|
ccan_list_for_each(&r->threads.set, th, lt_node) {
|
|
|
|
switch (th->status) {
|
|
|
|
case THREAD_RUNNABLE:
|
|
|
|
case THREAD_STOPPED:
|
|
|
|
case THREAD_STOPPED_FOREVER:
|
|
|
|
rb_ary_push(ary, th->self);
|
|
|
|
default:
|
|
|
|
break;
|
2020-03-10 02:22:11 +09:00
|
|
|
}
|
|
|
|
}
|
2020-12-24 04:18:17 +09:00
|
|
|
|
2020-03-10 02:22:11 +09:00
|
|
|
return ary;
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
rb_ractor_living_threads_insert(rb_ractor_t *r, rb_thread_t *th)
|
|
|
|
{
|
|
|
|
VM_ASSERT(th != NULL);
|
|
|
|
|
|
|
|
RACTOR_LOCK(r);
|
|
|
|
{
|
2020-12-20 01:44:41 +09:00
|
|
|
RUBY_DEBUG_LOG("r(%d)->threads.cnt:%d++", r->pub.id, r->threads.cnt);
|
2022-03-30 16:36:31 +09:00
|
|
|
ccan_list_add_tail(&r->threads.set, &th->lt_node);
|
2020-03-10 02:22:11 +09:00
|
|
|
r->threads.cnt++;
|
|
|
|
}
|
|
|
|
RACTOR_UNLOCK(r);
|
|
|
|
|
|
|
|
// first thread for a ractor
|
|
|
|
if (r->threads.cnt == 1) {
|
|
|
|
VM_ASSERT(ractor_status_p(r, ractor_created));
|
|
|
|
vm_insert_ractor(th->vm, r);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
|
|
|
vm_ractor_blocking_cnt_inc(rb_vm_t *vm, rb_ractor_t *r, const char *file, int line)
|
|
|
|
{
|
|
|
|
ractor_status_set(r, ractor_blocking);
|
|
|
|
|
|
|
|
RUBY_DEBUG_LOG2(file, line, "vm->ractor.blocking_cnt:%d++", vm->ractor.blocking_cnt);
|
|
|
|
vm->ractor.blocking_cnt++;
|
|
|
|
VM_ASSERT(vm->ractor.blocking_cnt <= vm->ractor.cnt);
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
rb_vm_ractor_blocking_cnt_inc(rb_vm_t *vm, rb_ractor_t *cr, const char *file, int line)
|
|
|
|
{
|
|
|
|
ASSERT_vm_locking();
|
|
|
|
VM_ASSERT(GET_RACTOR() == cr);
|
|
|
|
vm_ractor_blocking_cnt_inc(vm, cr, file, line);
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
rb_vm_ractor_blocking_cnt_dec(rb_vm_t *vm, rb_ractor_t *cr, const char *file, int line)
|
|
|
|
{
|
|
|
|
ASSERT_vm_locking();
|
|
|
|
VM_ASSERT(GET_RACTOR() == cr);
|
|
|
|
|
|
|
|
RUBY_DEBUG_LOG2(file, line, "vm->ractor.blocking_cnt:%d--", vm->ractor.blocking_cnt);
|
|
|
|
VM_ASSERT(vm->ractor.blocking_cnt > 0);
|
|
|
|
vm->ractor.blocking_cnt--;
|
|
|
|
|
|
|
|
ractor_status_set(cr, ractor_running);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
2020-09-06 01:58:44 +09:00
|
|
|
ractor_check_blocking(rb_ractor_t *cr, unsigned int remained_thread_cnt, const char *file, int line)
|
2020-03-10 02:22:11 +09:00
|
|
|
{
|
|
|
|
VM_ASSERT(cr == GET_RACTOR());
|
|
|
|
|
|
|
|
RUBY_DEBUG_LOG2(file, line,
|
|
|
|
"cr->threads.cnt:%u cr->threads.blocking_cnt:%u vm->ractor.cnt:%u vm->ractor.blocking_cnt:%u",
|
|
|
|
cr->threads.cnt, cr->threads.blocking_cnt,
|
|
|
|
GET_VM()->ractor.cnt, GET_VM()->ractor.blocking_cnt);
|
|
|
|
|
|
|
|
VM_ASSERT(cr->threads.cnt >= cr->threads.blocking_cnt + 1);
|
|
|
|
|
2020-09-06 01:58:44 +09:00
|
|
|
if (remained_thread_cnt > 0 &&
|
2020-03-10 02:22:11 +09:00
|
|
|
// will block
|
|
|
|
cr->threads.cnt == cr->threads.blocking_cnt + 1) {
|
|
|
|
// change ractor status: running -> blocking
|
|
|
|
rb_vm_t *vm = GET_VM();
|
|
|
|
|
2023-04-04 16:24:59 -04:00
|
|
|
RB_VM_LOCK_ENTER();
|
2020-03-10 02:22:11 +09:00
|
|
|
{
|
|
|
|
rb_vm_ractor_blocking_cnt_inc(vm, cr, file, line);
|
|
|
|
}
|
2023-04-04 16:24:59 -04:00
|
|
|
RB_VM_LOCK_LEAVE();
|
2020-03-10 02:22:11 +09:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2023-04-10 10:53:13 +09:00
|
|
|
void rb_threadptr_remove(rb_thread_t *th);
|
|
|
|
|
2020-03-10 02:22:11 +09:00
|
|
|
void
|
|
|
|
rb_ractor_living_threads_remove(rb_ractor_t *cr, rb_thread_t *th)
|
|
|
|
{
|
|
|
|
VM_ASSERT(cr == GET_RACTOR());
|
|
|
|
RUBY_DEBUG_LOG("r->threads.cnt:%d--", cr->threads.cnt);
|
|
|
|
ractor_check_blocking(cr, cr->threads.cnt - 1, __FILE__, __LINE__);
|
|
|
|
|
2023-04-10 10:53:13 +09:00
|
|
|
rb_threadptr_remove(th);
|
|
|
|
|
2020-03-10 02:22:11 +09:00
|
|
|
if (cr->threads.cnt == 1) {
|
|
|
|
vm_remove_ractor(th->vm, cr);
|
|
|
|
}
|
|
|
|
else {
|
|
|
|
RACTOR_LOCK(cr);
|
|
|
|
{
|
2022-03-30 16:36:31 +09:00
|
|
|
ccan_list_del(&th->lt_node);
|
2020-03-10 02:22:11 +09:00
|
|
|
cr->threads.cnt--;
|
|
|
|
}
|
|
|
|
RACTOR_UNLOCK(cr);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
rb_ractor_blocking_threads_inc(rb_ractor_t *cr, const char *file, int line)
|
|
|
|
{
|
|
|
|
RUBY_DEBUG_LOG2(file, line, "cr->threads.blocking_cnt:%d++", cr->threads.blocking_cnt);
|
|
|
|
|
|
|
|
VM_ASSERT(cr->threads.cnt > 0);
|
|
|
|
VM_ASSERT(cr == GET_RACTOR());
|
|
|
|
|
|
|
|
ractor_check_blocking(cr, cr->threads.cnt, __FILE__, __LINE__);
|
|
|
|
cr->threads.blocking_cnt++;
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
rb_ractor_blocking_threads_dec(rb_ractor_t *cr, const char *file, int line)
|
|
|
|
{
|
|
|
|
RUBY_DEBUG_LOG2(file, line,
|
|
|
|
"r->threads.blocking_cnt:%d--, r->threads.cnt:%u",
|
|
|
|
cr->threads.blocking_cnt, cr->threads.cnt);
|
|
|
|
|
|
|
|
VM_ASSERT(cr == GET_RACTOR());
|
|
|
|
|
|
|
|
if (cr->threads.cnt == cr->threads.blocking_cnt) {
|
|
|
|
rb_vm_t *vm = GET_VM();
|
|
|
|
|
|
|
|
RB_VM_LOCK_ENTER();
|
|
|
|
{
|
|
|
|
rb_vm_ractor_blocking_cnt_dec(vm, cr, __FILE__, __LINE__);
|
|
|
|
}
|
|
|
|
RB_VM_LOCK_LEAVE();
|
|
|
|
}
|
|
|
|
|
|
|
|
cr->threads.blocking_cnt--;
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
rb_ractor_vm_barrier_interrupt_running_thread(rb_ractor_t *r)
|
|
|
|
{
|
|
|
|
VM_ASSERT(r != GET_RACTOR());
|
|
|
|
ASSERT_ractor_unlocking(r);
|
|
|
|
ASSERT_vm_locking();
|
|
|
|
|
|
|
|
RACTOR_LOCK(r);
|
|
|
|
{
|
|
|
|
if (ractor_status_p(r, ractor_running)) {
|
|
|
|
rb_execution_context_t *ec = r->threads.running_ec;
|
|
|
|
if (ec) {
|
|
|
|
RUBY_VM_SET_VM_BARRIER_INTERRUPT(ec);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
RACTOR_UNLOCK(r);
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
rb_ractor_terminate_interrupt_main_thread(rb_ractor_t *r)
|
|
|
|
{
|
|
|
|
VM_ASSERT(r != GET_RACTOR());
|
|
|
|
ASSERT_ractor_unlocking(r);
|
|
|
|
ASSERT_vm_locking();
|
|
|
|
|
|
|
|
rb_thread_t *main_th = r->threads.main;
|
|
|
|
if (main_th) {
|
|
|
|
if (main_th->status != THREAD_KILLED) {
|
|
|
|
RUBY_VM_SET_TERMINATE_INTERRUPT(main_th->ec);
|
|
|
|
rb_threadptr_interrupt(main_th);
|
|
|
|
}
|
|
|
|
else {
|
2021-10-03 11:42:31 +09:00
|
|
|
RUBY_DEBUG_LOG("killed (%p)", (void *)main_th);
|
2020-03-10 02:22:11 +09:00
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2020-11-10 18:21:11 +09:00
|
|
|
void rb_thread_terminate_all(rb_thread_t *th); // thread.c
|
2020-03-10 02:22:11 +09:00
|
|
|
|
|
|
|
static void
|
|
|
|
ractor_terminal_interrupt_all(rb_vm_t *vm)
|
|
|
|
{
|
|
|
|
if (vm->ractor.cnt > 1) {
|
|
|
|
// send terminate notification to all ractors
|
2020-09-04 11:46:50 +09:00
|
|
|
rb_ractor_t *r = 0;
|
2022-03-30 16:36:31 +09:00
|
|
|
ccan_list_for_each(&vm->ractor.set, r, vmlr_node) {
|
2020-03-10 02:22:11 +09:00
|
|
|
if (r != vm->ractor.main_ractor) {
|
2023-03-30 02:41:45 +09:00
|
|
|
RUBY_DEBUG_LOG("r:%d", rb_ractor_id(r));
|
2020-03-10 02:22:11 +09:00
|
|
|
rb_ractor_terminate_interrupt_main_thread(r);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2023-04-10 10:53:13 +09:00
|
|
|
void rb_add_running_thread(rb_thread_t *th);
|
|
|
|
void rb_del_running_thread(rb_thread_t *th);
|
|
|
|
|
2020-03-10 02:22:11 +09:00
|
|
|
void
|
|
|
|
rb_ractor_terminate_all(void)
|
|
|
|
{
|
|
|
|
rb_vm_t *vm = GET_VM();
|
|
|
|
rb_ractor_t *cr = vm->ractor.main_ractor;
|
|
|
|
|
2023-04-26 17:12:12 +09:00
|
|
|
RUBY_DEBUG_LOG("ractor.cnt:%d", (int)vm->ractor.cnt);
|
|
|
|
|
2020-03-10 02:22:11 +09:00
|
|
|
VM_ASSERT(cr == GET_RACTOR()); // only main-ractor's main-thread should kick it.
|
|
|
|
|
|
|
|
if (vm->ractor.cnt > 1) {
|
|
|
|
RB_VM_LOCK();
|
2023-03-30 02:41:45 +09:00
|
|
|
{
|
|
|
|
ractor_terminal_interrupt_all(vm); // kill all ractors
|
|
|
|
}
|
2020-03-10 02:22:11 +09:00
|
|
|
RB_VM_UNLOCK();
|
|
|
|
}
|
2020-11-10 18:21:11 +09:00
|
|
|
rb_thread_terminate_all(GET_THREAD()); // kill other threads in main-ractor and wait
|
2020-03-10 02:22:11 +09:00
|
|
|
|
|
|
|
RB_VM_LOCK();
|
|
|
|
{
|
|
|
|
while (vm->ractor.cnt > 1) {
|
|
|
|
RUBY_DEBUG_LOG("terminate_waiting:%d", vm->ractor.sync.terminate_waiting);
|
|
|
|
vm->ractor.sync.terminate_waiting = true;
|
|
|
|
|
|
|
|
// wait for 1sec
|
|
|
|
rb_vm_ractor_blocking_cnt_inc(vm, cr, __FILE__, __LINE__);
|
2023-04-10 10:53:13 +09:00
|
|
|
rb_del_running_thread(rb_ec_thread_ptr(cr->threads.running_ec));
|
2020-03-10 02:22:11 +09:00
|
|
|
rb_vm_cond_timedwait(vm, &vm->ractor.sync.terminate_cond, 1000 /* ms */);
|
2023-04-10 10:53:13 +09:00
|
|
|
rb_add_running_thread(rb_ec_thread_ptr(cr->threads.running_ec));
|
2020-03-10 02:22:11 +09:00
|
|
|
rb_vm_ractor_blocking_cnt_dec(vm, cr, __FILE__, __LINE__);
|
|
|
|
|
|
|
|
ractor_terminal_interrupt_all(vm);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
RB_VM_UNLOCK();
|
|
|
|
}
|
|
|
|
|
|
|
|
rb_execution_context_t *
|
|
|
|
rb_vm_main_ractor_ec(rb_vm_t *vm)
|
|
|
|
{
|
2024-01-22 16:22:14 +11:00
|
|
|
/* This code needs to carefully work around two bugs:
|
|
|
|
* - Bug #20016: When M:N threading is enabled, running_ec is NULL if no thread is
|
|
|
|
* actually currently running (as opposed to without M:N threading, when
|
|
|
|
* running_ec will still point to the _last_ thread which ran)
|
|
|
|
* - Bug #20197: If the main thread is sleeping, setting its postponed job
|
|
|
|
* interrupt flag is pointless; it won't look at the flag until it stops sleeping
|
|
|
|
* for some reason. It would be better to set the flag on the running ec, which
|
|
|
|
* will presumably look at it soon.
|
|
|
|
*
|
|
|
|
* Solution: use running_ec if it's set, otherwise fall back to the main thread ec.
|
|
|
|
* This is still susceptible to some rare race conditions (what if the last thread
|
|
|
|
* to run just entered a long-running sleep?), but seems like the best balance of
|
|
|
|
* robustness and complexity.
|
|
|
|
*/
|
|
|
|
rb_execution_context_t *running_ec = vm->ractor.main_ractor->threads.running_ec;
|
|
|
|
if (running_ec) { return running_ec; }
|
2023-12-20 15:29:03 -08:00
|
|
|
return vm->ractor.main_thread->ec;
|
2020-03-10 02:22:11 +09:00
|
|
|
}
|
|
|
|
|
|
|
|
static VALUE
|
|
|
|
ractor_moved_missing(int argc, VALUE *argv, VALUE self)
|
|
|
|
{
|
|
|
|
rb_raise(rb_eRactorMovedError, "can not send any methods to a moved object");
|
|
|
|
}
|
|
|
|
|
2023-12-15 18:25:12 +09:00
|
|
|
#ifndef USE_RACTOR_SELECTOR
|
|
|
|
#define USE_RACTOR_SELECTOR 0
|
|
|
|
#endif
|
|
|
|
|
|
|
|
RUBY_SYMBOL_EXPORT_BEGIN
|
|
|
|
void rb_init_ractor_selector(void);
|
|
|
|
RUBY_SYMBOL_EXPORT_END
|
|
|
|
|
2024-12-25 11:13:07 +09:00
|
|
|
/*
|
|
|
|
* Document-class: Ractor::Selector
|
|
|
|
* :nodoc: currently
|
|
|
|
*
|
|
|
|
* Selects multiple Ractors to be activated.
|
|
|
|
*/
|
2023-12-15 18:25:12 +09:00
|
|
|
void
|
|
|
|
rb_init_ractor_selector(void)
|
|
|
|
{
|
|
|
|
rb_cRactorSelector = rb_define_class_under(rb_cRactor, "Selector", rb_cObject);
|
|
|
|
rb_undef_alloc_func(rb_cRactorSelector);
|
|
|
|
|
|
|
|
rb_define_singleton_method(rb_cRactorSelector, "new", ractor_selector_new , -1);
|
|
|
|
rb_define_method(rb_cRactorSelector, "add", ractor_selector_add, 1);
|
|
|
|
rb_define_method(rb_cRactorSelector, "remove", ractor_selector_remove, 1);
|
|
|
|
rb_define_method(rb_cRactorSelector, "clear", ractor_selector_clear, 0);
|
|
|
|
rb_define_method(rb_cRactorSelector, "empty?", ractor_selector_empty_p, 0);
|
|
|
|
rb_define_method(rb_cRactorSelector, "wait", ractor_selector_wait, -1);
|
|
|
|
rb_define_method(rb_cRactorSelector, "_wait", ractor_selector__wait, 4);
|
|
|
|
}
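/*
 * Hypothetical usage of the methods registered above. Ractor::Selector is
 * :nodoc: and only compiled in when USE_RACTOR_SELECTOR is enabled, so this
 * is a sketch of the intended shape rather than a stable API:
 *
 *   s = Ractor::Selector.new
 *   s.add(Ractor.new { :a })
 *   s.add(Ractor.new { :b })
 *   r, v = s.wait   # roughly Ractor.select over the added ractors
 */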
|
|
|
|
|
2020-12-19 20:04:40 +02:00
|
|
|
/*
|
|
|
|
* Document-class: Ractor::ClosedError
|
|
|
|
*
|
2020-12-19 13:08:24 -05:00
|
|
|
* Raised when an attempt is made to send a message to a closed port,
|
|
|
|
* or to retrieve a message from a closed and empty port.
|
|
|
|
* Ports may be closed explicitly with Ractor#close_outgoing/close_incoming
|
|
|
|
* and are closed implicitly when a Ractor terminates.
|
2020-12-19 20:04:40 +02:00
|
|
|
*
|
|
|
|
* r = Ractor.new { sleep(500) }
|
|
|
|
* r.close_outgoing
|
|
|
|
* r.take # Ractor::ClosedError
|
|
|
|
*
|
|
|
|
* ClosedError is a descendant of StopIteration, so the closing of the ractor will break
|
|
|
|
* the loops without propagating the error:
|
|
|
|
*
|
2020-12-22 01:05:52 +09:00
|
|
|
* r = Ractor.new do
|
|
|
|
* loop do
|
|
|
|
* msg = receive # raises ClosedError and loop traps it
|
|
|
|
* puts "Received: #{msg}"
|
|
|
|
* end
|
|
|
|
* puts "loop exited"
|
|
|
|
* end
|
2020-12-19 20:04:40 +02:00
|
|
|
*
|
2020-12-22 01:05:52 +09:00
|
|
|
* 3.times{|i| r << i}
|
|
|
|
* r.close_incoming
|
|
|
|
* r.take
|
2020-12-19 20:04:40 +02:00
|
|
|
* puts "Continue successfully"
|
|
|
|
*
|
|
|
|
* This will print:
|
|
|
|
*
|
2020-12-22 01:05:52 +09:00
|
|
|
* Received: 0
|
|
|
|
* Received: 1
|
|
|
|
* Received: 2
|
|
|
|
* loop exited
|
2020-12-19 20:04:40 +02:00
|
|
|
* Continue successfully
|
|
|
|
*/
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Document-class: Ractor::RemoteError
|
|
|
|
*
|
|
|
|
* Raised on attempt to Ractor#take if there was an uncaught exception in the Ractor.
|
|
|
|
* Its +cause+ will contain the original exception, and +ractor+ is the original ractor
|
|
|
|
* it was raised in.
|
|
|
|
*
|
|
|
|
* r = Ractor.new { raise "Something weird happened" }
|
|
|
|
*
|
|
|
|
* begin
|
|
|
|
* r.take
|
|
|
|
* rescue => e
|
|
|
|
* p e # => #<Ractor::RemoteError: thrown by remote Ractor.>
|
|
|
|
* p e.ractor == r # => true
|
|
|
|
* p e.cause # => #<RuntimeError: Something weird happened>
|
|
|
|
* end
|
|
|
|
*
|
|
|
|
*/
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Document-class: Ractor::MovedError
|
|
|
|
*
|
|
|
|
* Raised on an attempt to access an object which was moved in Ractor#send or Ractor.yield.
|
|
|
|
*
|
|
|
|
* r = Ractor.new { sleep }
|
|
|
|
*
|
|
|
|
* ary = [1, 2, 3]
|
|
|
|
* r.send(ary, move: true)
|
|
|
|
* ary.inspect
|
|
|
|
* # Ractor::MovedError (can not send any methods to a moved object)
|
|
|
|
*
|
|
|
|
*/
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Document-class: Ractor::MovedObject
|
|
|
|
*
|
|
|
|
* A special object which replaces any value that was moved to another ractor in Ractor#send
|
|
|
|
* or Ractor.yield. Any attempt to access the object results in Ractor::MovedError.
|
|
|
|
*
|
|
|
|
* r = Ractor.new { receive }
|
|
|
|
*
|
|
|
|
* ary = [1, 2, 3]
|
|
|
|
* r.send(ary, move: true)
|
|
|
|
* p Ractor::MovedObject === ary
|
|
|
|
* # => true
|
|
|
|
* ary.inspect
|
|
|
|
* # Ractor::MovedError (can not send any methods to a moved object)
|
|
|
|
*/
|
|
|
|
|
|
|
|
// Main docs are in ractor.rb, but without this clause there are weird artifacts
|
|
|
|
// in their rendering.
|
|
|
|
/*
|
|
|
|
* Document-class: Ractor
|
|
|
|
*
|
|
|
|
*/
|
|
|
|
|
2020-03-10 02:22:11 +09:00
|
|
|
void
|
|
|
|
Init_Ractor(void)
|
|
|
|
{
|
|
|
|
rb_cRactor = rb_define_class("Ractor", rb_cObject);
|
2021-02-18 17:59:40 +09:00
|
|
|
rb_undef_alloc_func(rb_cRactor);
|
|
|
|
|
2020-12-21 18:06:28 +09:00
|
|
|
rb_eRactorError = rb_define_class_under(rb_cRactor, "Error", rb_eRuntimeError);
|
|
|
|
rb_eRactorIsolationError = rb_define_class_under(rb_cRactor, "IsolationError", rb_eRactorError);
|
|
|
|
rb_eRactorRemoteError = rb_define_class_under(rb_cRactor, "RemoteError", rb_eRactorError);
|
|
|
|
rb_eRactorMovedError = rb_define_class_under(rb_cRactor, "MovedError", rb_eRactorError);
|
|
|
|
rb_eRactorClosedError = rb_define_class_under(rb_cRactor, "ClosedError", rb_eStopIteration);
|
|
|
|
rb_eRactorUnsafeError = rb_define_class_under(rb_cRactor, "UnsafeError", rb_eRactorError);
|
2020-03-10 02:22:11 +09:00
|
|
|
|
|
|
|
rb_cRactorMovedObject = rb_define_class_under(rb_cRactor, "MovedObject", rb_cBasicObject);
|
|
|
|
rb_undef_alloc_func(rb_cRactorMovedObject);
|
|
|
|
rb_define_method(rb_cRactorMovedObject, "method_missing", ractor_moved_missing, -1);
|
|
|
|
|
|
|
|
// override methods defined in BasicObject
|
|
|
|
rb_define_method(rb_cRactorMovedObject, "__send__", ractor_moved_missing, -1);
|
|
|
|
rb_define_method(rb_cRactorMovedObject, "!", ractor_moved_missing, -1);
|
|
|
|
rb_define_method(rb_cRactorMovedObject, "==", ractor_moved_missing, -1);
|
|
|
|
rb_define_method(rb_cRactorMovedObject, "!=", ractor_moved_missing, -1);
|
|
|
|
rb_define_method(rb_cRactorMovedObject, "__id__", ractor_moved_missing, -1);
|
|
|
|
rb_define_method(rb_cRactorMovedObject, "equal?", ractor_moved_missing, -1);
|
|
|
|
rb_define_method(rb_cRactorMovedObject, "instance_eval", ractor_moved_missing, -1);
|
|
|
|
rb_define_method(rb_cRactorMovedObject, "instance_exec", ractor_moved_missing, -1);
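    /*
     * Effect of the overrides above: once an object has been moved, even the
     * BasicObject primitives raise Ractor::MovedError, e.g.
     *
     *   r = Ractor.new { receive }
     *   ary = [1, 2, 3]
     *   r.send(ary, move: true)
     *   ary.equal?(ary)  # raises Ractor::MovedError
     */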
|
2023-02-24 18:46:17 +09:00
|
|
|
|
2024-11-05 04:54:06 +09:00
|
|
|
// internal
|
|
|
|
|
2023-12-15 18:25:12 +09:00
|
|
|
#if USE_RACTOR_SELECTOR
|
|
|
|
rb_init_ractor_selector();
|
|
|
|
#endif
|
2020-03-10 02:22:11 +09:00
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
rb_ractor_dump(void)
|
|
|
|
{
|
|
|
|
rb_vm_t *vm = GET_VM();
|
2020-09-04 11:46:50 +09:00
|
|
|
rb_ractor_t *r = 0;
|
2020-03-10 02:22:11 +09:00
|
|
|
|
2022-03-30 16:36:31 +09:00
|
|
|
ccan_list_for_each(&vm->ractor.set, r, vmlr_node) {
|
2020-03-10 02:22:11 +09:00
|
|
|
if (r != vm->ractor.main_ractor) {
|
2020-12-20 01:44:41 +09:00
|
|
|
fprintf(stderr, "r:%u (%s)\n", r->pub.id, ractor_status_str(r->status_));
|
2020-03-10 02:22:11 +09:00
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
VALUE
|
|
|
|
rb_ractor_stdin(void)
|
|
|
|
{
|
|
|
|
if (rb_ractor_main_p()) {
|
|
|
|
return rb_stdin;
|
|
|
|
}
|
|
|
|
else {
|
|
|
|
rb_ractor_t *cr = GET_RACTOR();
|
|
|
|
return cr->r_stdin;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
VALUE
|
|
|
|
rb_ractor_stdout(void)
|
|
|
|
{
|
|
|
|
if (rb_ractor_main_p()) {
|
|
|
|
return rb_stdout;
|
|
|
|
}
|
|
|
|
else {
|
|
|
|
rb_ractor_t *cr = GET_RACTOR();
|
|
|
|
return cr->r_stdout;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
VALUE
|
|
|
|
rb_ractor_stderr(void)
|
|
|
|
{
|
|
|
|
if (rb_ractor_main_p()) {
|
|
|
|
return rb_stderr;
|
|
|
|
}
|
|
|
|
else {
|
|
|
|
rb_ractor_t *cr = GET_RACTOR();
|
|
|
|
return cr->r_stderr;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
rb_ractor_stdin_set(VALUE in)
|
|
|
|
{
|
|
|
|
if (rb_ractor_main_p()) {
|
|
|
|
rb_stdin = in;
|
|
|
|
}
|
|
|
|
else {
|
|
|
|
rb_ractor_t *cr = GET_RACTOR();
|
2020-12-20 01:44:41 +09:00
|
|
|
RB_OBJ_WRITE(cr->pub.self, &cr->r_stdin, in);
|
2020-03-10 02:22:11 +09:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
rb_ractor_stdout_set(VALUE out)
|
|
|
|
{
|
|
|
|
if (rb_ractor_main_p()) {
|
|
|
|
rb_stdout = out;
|
|
|
|
}
|
|
|
|
else {
|
|
|
|
rb_ractor_t *cr = GET_RACTOR();
|
2020-12-20 01:44:41 +09:00
|
|
|
RB_OBJ_WRITE(cr->pub.self, &cr->r_stdout, out);
|
2020-03-10 02:22:11 +09:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
rb_ractor_stderr_set(VALUE err)
|
|
|
|
{
|
|
|
|
if (rb_ractor_main_p()) {
|
|
|
|
rb_stderr = err;
|
|
|
|
}
|
|
|
|
else {
|
|
|
|
rb_ractor_t *cr = GET_RACTOR();
|
2020-12-20 01:44:41 +09:00
|
|
|
RB_OBJ_WRITE(cr->pub.self, &cr->r_stderr, err);
|
2020-03-10 02:22:11 +09:00
|
|
|
}
|
|
|
|
}
|
2020-10-21 00:54:03 +09:00
|
|
|
|
2020-12-19 06:38:58 +09:00
|
|
|
rb_hook_list_t *
|
|
|
|
rb_ractor_hooks(rb_ractor_t *cr)
|
|
|
|
{
|
2020-12-20 01:44:41 +09:00
|
|
|
return &cr->pub.hooks;
|
2020-12-19 06:38:58 +09:00
|
|
|
}
|
|
|
|
|
2020-10-21 00:54:03 +09:00
|
|
|
/// traverse function
|
|
|
|
|
|
|
|
// 2: stop search
|
|
|
|
// 1: skip child
|
|
|
|
// 0: continue
|
2020-10-22 00:43:32 +09:00
|
|
|
|
|
|
|
enum obj_traverse_iterator_result {
|
|
|
|
traverse_cont,
|
|
|
|
traverse_skip,
|
|
|
|
traverse_stop,
|
|
|
|
};
|
|
|
|
|
|
|
|
typedef enum obj_traverse_iterator_result (*rb_obj_traverse_enter_func)(VALUE obj);
|
|
|
|
typedef enum obj_traverse_iterator_result (*rb_obj_traverse_leave_func)(VALUE obj);
|
2020-11-30 16:07:36 +09:00
|
|
|
typedef enum obj_traverse_iterator_result (*rb_obj_traverse_final_func)(VALUE obj);
|
|
|
|
|
|
|
|
static enum obj_traverse_iterator_result null_leave(VALUE obj);
|
2020-10-21 00:54:03 +09:00
|
|
|
|
|
|
|
struct obj_traverse_data {
|
|
|
|
rb_obj_traverse_enter_func enter_func;
|
|
|
|
rb_obj_traverse_leave_func leave_func;
|
|
|
|
|
|
|
|
st_table *rec;
|
2020-10-21 23:00:36 +09:00
|
|
|
VALUE rec_hash;
|
2020-10-21 00:54:03 +09:00
|
|
|
};
|
|
|
|
|
|
|
|
|
|
|
|
struct obj_traverse_callback_data {
|
|
|
|
bool stop;
|
|
|
|
struct obj_traverse_data *data;
|
|
|
|
};
|
|
|
|
|
2020-10-21 23:00:36 +09:00
|
|
|
static int obj_traverse_i(VALUE obj, struct obj_traverse_data *data);
|
2020-10-21 00:54:03 +09:00
|
|
|
|
|
|
|
static int
|
|
|
|
obj_hash_traverse_i(VALUE key, VALUE val, VALUE ptr)
|
|
|
|
{
|
|
|
|
struct obj_traverse_callback_data *d = (struct obj_traverse_callback_data *)ptr;
|
|
|
|
|
2020-10-21 23:00:36 +09:00
|
|
|
if (obj_traverse_i(key, d->data)) {
|
2020-10-21 00:54:03 +09:00
|
|
|
d->stop = true;
|
|
|
|
return ST_STOP;
|
|
|
|
}
|
|
|
|
|
2020-10-21 23:00:36 +09:00
|
|
|
if (obj_traverse_i(val, d->data)) {
|
2020-10-21 00:54:03 +09:00
|
|
|
d->stop = true;
|
|
|
|
return ST_STOP;
|
|
|
|
}
|
|
|
|
|
|
|
|
return ST_CONTINUE;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
2020-10-31 00:40:04 +09:00
|
|
|
obj_traverse_reachable_i(VALUE obj, void *ptr)
|
2020-10-21 00:54:03 +09:00
|
|
|
{
|
|
|
|
struct obj_traverse_callback_data *d = (struct obj_traverse_callback_data *)ptr;
|
|
|
|
|
2020-10-21 23:00:36 +09:00
|
|
|
if (obj_traverse_i(obj, d->data)) {
|
2020-10-21 00:54:03 +09:00
|
|
|
d->stop = true;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2020-10-21 23:00:36 +09:00
|
|
|
static struct st_table *
|
|
|
|
obj_traverse_rec(struct obj_traverse_data *data)
|
|
|
|
{
|
|
|
|
if (UNLIKELY(!data->rec)) {
|
|
|
|
data->rec_hash = rb_ident_hash_new();
|
2023-01-31 13:30:50 -05:00
|
|
|
data->rec = RHASH_ST_TABLE(data->rec_hash);
|
2020-10-21 23:00:36 +09:00
|
|
|
}
|
|
|
|
return data->rec;
|
|
|
|
}
|
|
|
|
|
2023-11-28 09:26:41 -05:00
|
|
|
static int
|
|
|
|
obj_traverse_ivar_foreach_i(ID key, VALUE val, st_data_t ptr)
|
|
|
|
{
|
|
|
|
struct obj_traverse_callback_data *d = (struct obj_traverse_callback_data *)ptr;
|
|
|
|
|
|
|
|
if (obj_traverse_i(val, d->data)) {
|
|
|
|
d->stop = true;
|
|
|
|
return ST_STOP;
|
|
|
|
}
|
|
|
|
|
|
|
|
return ST_CONTINUE;
|
|
|
|
}
|
|
|
|
|
2020-10-21 00:54:03 +09:00
|
|
|
static int
|
2020-10-21 23:00:36 +09:00
|
|
|
obj_traverse_i(VALUE obj, struct obj_traverse_data *data)
|
2020-10-21 00:54:03 +09:00
|
|
|
{
|
|
|
|
if (RB_SPECIAL_CONST_P(obj)) return 0;
|
|
|
|
|
2020-10-21 23:00:36 +09:00
|
|
|
switch (data->enter_func(obj)) {
|
2020-10-22 00:43:32 +09:00
|
|
|
case traverse_cont: break;
|
|
|
|
case traverse_skip: return 0; // skip children
|
|
|
|
case traverse_stop: return 1; // stop search
|
2020-10-21 00:54:03 +09:00
|
|
|
}
|
|
|
|
|
2020-10-22 00:43:32 +09:00
|
|
|
if (UNLIKELY(st_insert(obj_traverse_rec(data), obj, 1))) {
|
2020-10-21 00:54:03 +09:00
|
|
|
// already traversed
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2023-11-28 09:26:41 -05:00
|
|
|
struct obj_traverse_callback_data d = {
|
|
|
|
.stop = false,
|
|
|
|
.data = data,
|
|
|
|
};
|
|
|
|
rb_ivar_foreach(obj, obj_traverse_ivar_foreach_i, (st_data_t)&d);
|
|
|
|
if (d.stop) return 1;
|
2020-10-21 00:54:03 +09:00
|
|
|
|
|
|
|
switch (BUILTIN_TYPE(obj)) {
|
|
|
|
// no child node
|
|
|
|
case T_STRING:
|
|
|
|
case T_FLOAT:
|
|
|
|
case T_BIGNUM:
|
|
|
|
case T_REGEXP:
|
|
|
|
case T_FILE:
|
|
|
|
case T_SYMBOL:
|
|
|
|
case T_MATCH:
|
|
|
|
break;
|
|
|
|
|
|
|
|
case T_OBJECT:
|
2023-11-28 09:26:55 -05:00
|
|
|
/* Instance variables already traversed. */
|
2020-10-21 00:54:03 +09:00
|
|
|
break;
|
|
|
|
|
|
|
|
case T_ARRAY:
|
|
|
|
{
|
|
|
|
for (int i = 0; i < RARRAY_LENINT(obj); i++) {
|
|
|
|
VALUE e = rb_ary_entry(obj, i);
|
2020-10-21 23:00:36 +09:00
|
|
|
if (obj_traverse_i(e, data)) return 1;
|
Ractor.make_shareable(obj)
Introduce new method Ractor.make_shareable(obj) which tries to make
obj shareable object. Protocol is here.
(1) If obj is shareable, it is shareable.
(2) If obj is not a shareable object and if obj can be shareable
object if it is frozen, then freeze obj. If obj has reachable
objects (rs), do rs.each{|o| Ractor.make_shareable(o)}
recursively (recursion is not Ruby-level, but C-level).
(3) Otherwise, raise Ractor::Error. Now T_DATA is not a shareable
object even if the object is frozen.
If the method finished without error, given obj is marked as
a sharable object.
To allow makng a shareable frozen T_DATA object, then set
`RUBY_TYPED_FROZEN_SHAREABLE` as type->flags. On default,
this flag is not set. It means user defined T_DATA objects are
not allowed to become shareable objects when it is frozen.
You can make any object shareable by setting FL_SHAREABLE flag,
so if you know that the T_DATA object is shareable (== thread-safe),
set this flag, at creation time for example. `Ractor` object is one
example, which is not a frozen, but a shareable object.
2020-10-21 00:54:03 +09:00
|
|
|
}
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
|
|
|
|
case T_HASH:
|
|
|
|
{
|
2020-10-21 23:00:36 +09:00
|
|
|
if (obj_traverse_i(RHASH_IFNONE(obj), data)) return 1;
|
Ractor.make_shareable(obj)
Introduce new method Ractor.make_shareable(obj) which tries to make
obj shareable object. Protocol is here.
(1) If obj is shareable, it is shareable.
(2) If obj is not a shareable object and if obj can be shareable
object if it is frozen, then freeze obj. If obj has reachable
objects (rs), do rs.each{|o| Ractor.make_shareable(o)}
recursively (recursion is not Ruby-level, but C-level).
(3) Otherwise, raise Ractor::Error. Now T_DATA is not a shareable
object even if the object is frozen.
If the method finished without error, given obj is marked as
a sharable object.
To allow makng a shareable frozen T_DATA object, then set
`RUBY_TYPED_FROZEN_SHAREABLE` as type->flags. On default,
this flag is not set. It means user defined T_DATA objects are
not allowed to become shareable objects when it is frozen.
You can make any object shareable by setting FL_SHAREABLE flag,
so if you know that the T_DATA object is shareable (== thread-safe),
set this flag, at creation time for example. `Ractor` object is one
example, which is not a frozen, but a shareable object.
2020-10-21 00:54:03 +09:00
|
|
|
|
|
|
|
struct obj_traverse_callback_data d = {
|
|
|
|
.stop = false,
|
|
|
|
.data = data,
|
|
|
|
};
|
|
|
|
rb_hash_foreach(obj, obj_hash_traverse_i, (VALUE)&d);
|
|
|
|
if (d.stop) return 1;
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
|
|
|
|
case T_STRUCT:
|
|
|
|
{
|
|
|
|
long len = RSTRUCT_LEN(obj);
|
|
|
|
const VALUE *ptr = RSTRUCT_CONST_PTR(obj);
|
|
|
|
|
|
|
|
for (long i=0; i<len; i++) {
|
2020-10-21 23:00:36 +09:00
|
|
|
if (obj_traverse_i(ptr[i], data)) return 1;
|
Ractor.make_shareable(obj)
Introduce new method Ractor.make_shareable(obj) which tries to make
obj shareable object. Protocol is here.
(1) If obj is shareable, it is shareable.
(2) If obj is not a shareable object and if obj can be shareable
object if it is frozen, then freeze obj. If obj has reachable
objects (rs), do rs.each{|o| Ractor.make_shareable(o)}
recursively (recursion is not Ruby-level, but C-level).
(3) Otherwise, raise Ractor::Error. Now T_DATA is not a shareable
object even if the object is frozen.
If the method finished without error, given obj is marked as
a sharable object.
To allow makng a shareable frozen T_DATA object, then set
`RUBY_TYPED_FROZEN_SHAREABLE` as type->flags. On default,
this flag is not set. It means user defined T_DATA objects are
not allowed to become shareable objects when it is frozen.
You can make any object shareable by setting FL_SHAREABLE flag,
so if you know that the T_DATA object is shareable (== thread-safe),
set this flag, at creation time for example. `Ractor` object is one
example, which is not a frozen, but a shareable object.
2020-10-21 00:54:03 +09:00
|
|
|
}
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
|
|
|
|
case T_RATIONAL:
|
2020-10-21 23:00:36 +09:00
|
|
|
if (obj_traverse_i(RRATIONAL(obj)->num, data)) return 1;
|
|
|
|
if (obj_traverse_i(RRATIONAL(obj)->den, data)) return 1;
|
Ractor.make_shareable(obj)
Introduce new method Ractor.make_shareable(obj) which tries to make
obj shareable object. Protocol is here.
(1) If obj is shareable, it is shareable.
(2) If obj is not a shareable object and if obj can be shareable
object if it is frozen, then freeze obj. If obj has reachable
objects (rs), do rs.each{|o| Ractor.make_shareable(o)}
recursively (recursion is not Ruby-level, but C-level).
(3) Otherwise, raise Ractor::Error. Now T_DATA is not a shareable
object even if the object is frozen.
If the method finished without error, given obj is marked as
a sharable object.
To allow makng a shareable frozen T_DATA object, then set
`RUBY_TYPED_FROZEN_SHAREABLE` as type->flags. On default,
this flag is not set. It means user defined T_DATA objects are
not allowed to become shareable objects when it is frozen.
You can make any object shareable by setting FL_SHAREABLE flag,
so if you know that the T_DATA object is shareable (== thread-safe),
set this flag, at creation time for example. `Ractor` object is one
example, which is not a frozen, but a shareable object.
2020-10-21 00:54:03 +09:00
|
|
|
break;
|
|
|
|
case T_COMPLEX:
|
2020-10-21 23:00:36 +09:00
|
|
|
if (obj_traverse_i(RCOMPLEX(obj)->real, data)) return 1;
|
|
|
|
if (obj_traverse_i(RCOMPLEX(obj)->imag, data)) return 1;
|
Ractor.make_shareable(obj)
Introduce new method Ractor.make_shareable(obj) which tries to make
obj shareable object. Protocol is here.
(1) If obj is shareable, it is shareable.
(2) If obj is not a shareable object and if obj can be shareable
object if it is frozen, then freeze obj. If obj has reachable
objects (rs), do rs.each{|o| Ractor.make_shareable(o)}
recursively (recursion is not Ruby-level, but C-level).
(3) Otherwise, raise Ractor::Error. Now T_DATA is not a shareable
object even if the object is frozen.
If the method finished without error, given obj is marked as
a sharable object.
To allow makng a shareable frozen T_DATA object, then set
`RUBY_TYPED_FROZEN_SHAREABLE` as type->flags. On default,
this flag is not set. It means user defined T_DATA objects are
not allowed to become shareable objects when it is frozen.
You can make any object shareable by setting FL_SHAREABLE flag,
so if you know that the T_DATA object is shareable (== thread-safe),
set this flag, at creation time for example. `Ractor` object is one
example, which is not a frozen, but a shareable object.
2020-10-21 00:54:03 +09:00
|
|
|
break;
|
|
|
|
|
|
|
|
case T_DATA:
|
2020-10-31 00:40:04 +09:00
|
|
|
case T_IMEMO:
|
Ractor.make_shareable(obj)
Introduce new method Ractor.make_shareable(obj) which tries to make
obj shareable object. Protocol is here.
(1) If obj is shareable, it is shareable.
(2) If obj is not a shareable object and if obj can be shareable
object if it is frozen, then freeze obj. If obj has reachable
objects (rs), do rs.each{|o| Ractor.make_shareable(o)}
recursively (recursion is not Ruby-level, but C-level).
(3) Otherwise, raise Ractor::Error. Now T_DATA is not a shareable
object even if the object is frozen.
If the method finished without error, given obj is marked as
a sharable object.
To allow makng a shareable frozen T_DATA object, then set
`RUBY_TYPED_FROZEN_SHAREABLE` as type->flags. On default,
this flag is not set. It means user defined T_DATA objects are
not allowed to become shareable objects when it is frozen.
You can make any object shareable by setting FL_SHAREABLE flag,
so if you know that the T_DATA object is shareable (== thread-safe),
set this flag, at creation time for example. `Ractor` object is one
example, which is not a frozen, but a shareable object.
2020-10-21 00:54:03 +09:00
|
|
|
{
|
|
|
|
struct obj_traverse_callback_data d = {
|
|
|
|
.stop = false,
|
|
|
|
.data = data,
|
|
|
|
};
|
2021-08-17 09:38:40 -04:00
|
|
|
RB_VM_LOCK_ENTER_NO_BARRIER();
|
|
|
|
{
|
|
|
|
rb_objspace_reachable_objects_from(obj, obj_traverse_reachable_i, &d);
|
|
|
|
}
|
|
|
|
RB_VM_LOCK_LEAVE_NO_BARRIER();
|
Ractor.make_shareable(obj)
Introduce new method Ractor.make_shareable(obj) which tries to make
obj shareable object. Protocol is here.
(1) If obj is shareable, it is shareable.
(2) If obj is not a shareable object and if obj can be shareable
object if it is frozen, then freeze obj. If obj has reachable
objects (rs), do rs.each{|o| Ractor.make_shareable(o)}
recursively (recursion is not Ruby-level, but C-level).
(3) Otherwise, raise Ractor::Error. Now T_DATA is not a shareable
object even if the object is frozen.
If the method finished without error, given obj is marked as
a sharable object.
To allow makng a shareable frozen T_DATA object, then set
`RUBY_TYPED_FROZEN_SHAREABLE` as type->flags. On default,
this flag is not set. It means user defined T_DATA objects are
not allowed to become shareable objects when it is frozen.
You can make any object shareable by setting FL_SHAREABLE flag,
so if you know that the T_DATA object is shareable (== thread-safe),
set this flag, at creation time for example. `Ractor` object is one
example, which is not a frozen, but a shareable object.
2020-10-21 00:54:03 +09:00
|
|
|
if (d.stop) return 1;
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
|
|
|
|
// unreachable
|
|
|
|
case T_CLASS:
|
|
|
|
case T_MODULE:
|
|
|
|
case T_ICLASS:
|
|
|
|
default:
|
|
|
|
rp(obj);
|
|
|
|
rb_bug("unreachable");
|
|
|
|
}
|
|
|
|
|
2020-10-22 00:43:32 +09:00
|
|
|
if (data->leave_func(obj) == traverse_stop) {
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
else {
|
|
|
|
return 0;
|
Ractor.make_shareable(obj)
Introduce new method Ractor.make_shareable(obj) which tries to make
obj shareable object. Protocol is here.
(1) If obj is shareable, it is shareable.
(2) If obj is not a shareable object and if obj can be shareable
object if it is frozen, then freeze obj. If obj has reachable
objects (rs), do rs.each{|o| Ractor.make_shareable(o)}
recursively (recursion is not Ruby-level, but C-level).
(3) Otherwise, raise Ractor::Error. Now T_DATA is not a shareable
object even if the object is frozen.
If the method finished without error, given obj is marked as
a sharable object.
To allow makng a shareable frozen T_DATA object, then set
`RUBY_TYPED_FROZEN_SHAREABLE` as type->flags. On default,
this flag is not set. It means user defined T_DATA objects are
not allowed to become shareable objects when it is frozen.
You can make any object shareable by setting FL_SHAREABLE flag,
so if you know that the T_DATA object is shareable (== thread-safe),
set this flag, at creation time for example. `Ractor` object is one
example, which is not a frozen, but a shareable object.
2020-10-21 00:54:03 +09:00
|
|
|
}
|
|
|
|
}
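
// Final pass over the traversal record: final_func is invoked once for each
// object that was visited, but only when the walk finished without being
// stopped.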
struct rb_obj_traverse_final_data {
    rb_obj_traverse_final_func final_func;
    int stopped;
};

static int
obj_traverse_final_i(st_data_t key, st_data_t val, st_data_t arg)
{
    struct rb_obj_traverse_final_data *data = (void *)arg;
    if (data->final_func(key)) {
        data->stopped = 1;
        return ST_STOP;
    }
    return ST_CONTINUE;
}

// 0: traverse all
// 1: stopped
static int
rb_obj_traverse(VALUE obj,
                rb_obj_traverse_enter_func enter_func,
                rb_obj_traverse_leave_func leave_func,
                rb_obj_traverse_final_func final_func)
{
    struct obj_traverse_data data = {
        .enter_func = enter_func,
        .leave_func = leave_func,
        .rec = NULL,
    };

    if (obj_traverse_i(obj, &data)) return 1;
    if (final_func && data.rec) {
        struct rb_obj_traverse_final_data f = {final_func, 0};
        st_foreach(data.rec, obj_traverse_final_i, (st_data_t)&f);
        return f.stopped;
    }
    return 0;
}
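
/*
 * T_DATA objects may only become shareable by freezing when their typed-data
 * definition opts in with RUBY_TYPED_FROZEN_SHAREABLE. Illustrative sketch of
 * such an opt-in (the type name and callbacks are hypothetical, not defined
 * in this file):
 *
 *   static const rb_data_type_t example_data_type = {
 *       .wrap_struct_name = "example",
 *       .function = { .dfree = RUBY_TYPED_DEFAULT_FREE, },
 *       .flags = RUBY_TYPED_FREE_IMMEDIATELY | RUBY_TYPED_FROZEN_SHAREABLE,
 *   };
 */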
static int
allow_frozen_shareable_p(VALUE obj)
{
    if (!RB_TYPE_P(obj, T_DATA)) {
        return true;
    }
    else if (RTYPEDDATA_P(obj)) {
        const rb_data_type_t *type = RTYPEDDATA_TYPE(obj);
        if (type->flags & RUBY_TYPED_FROZEN_SHAREABLE) {
            return true;
        }
    }

    return false;
}
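
// enter_func for Ractor.make_shareable: skip objects that are already
// shareable; for T_DATA that cannot be made shareable by freezing, turn Procs
// into shareable procs and raise Ractor::Error for everything else; otherwise
// freeze the object before descending into its references.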
static enum obj_traverse_iterator_result
make_shareable_check_shareable(VALUE obj)
{
    VM_ASSERT(!SPECIAL_CONST_P(obj));

    if (rb_ractor_shareable_p(obj)) {
        return traverse_skip;
    }
    else if (!allow_frozen_shareable_p(obj)) {
        if (rb_obj_is_proc(obj)) {
            rb_proc_ractor_make_shareable(obj);
            return traverse_cont;
        }
        else {
            rb_raise(rb_eRactorError, "can not make shareable object for %+"PRIsVALUE, obj);
        }
    }

    if (RB_TYPE_P(obj, T_IMEMO)) {
        return traverse_skip;
    }

    if (!RB_OBJ_FROZEN_RAW(obj)) {
        rb_funcall(obj, idFreeze, 0);

        if (UNLIKELY(!RB_OBJ_FROZEN_RAW(obj))) {
            rb_raise(rb_eRactorError, "#freeze does not freeze object correctly");
        }

        if (RB_OBJ_SHAREABLE_P(obj)) {
            return traverse_skip;
        }
    }

    return traverse_cont;
}

static enum obj_traverse_iterator_result
mark_shareable(VALUE obj)
{
    FL_SET_RAW(obj, RUBY_FL_SHAREABLE);
    return traverse_cont;
}
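
/*
 * Deep-freezes obj and everything reachable from it, then marks every
 * traversed object with FL_SHAREABLE. Minimal C-level usage sketch
 * (illustrative only):
 *
 *   VALUE ary = rb_ary_new_from_args(2, rb_str_new_cstr("a"), rb_str_new_cstr("b"));
 *   rb_ractor_make_shareable(ary); // ary and both strings end up frozen and shareable
 */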
VALUE
rb_ractor_make_shareable(VALUE obj)
{
    rb_obj_traverse(obj,
                    make_shareable_check_shareable,
                    null_leave, mark_shareable);
    return obj;
}

VALUE
rb_ractor_make_shareable_copy(VALUE obj)
{
    VALUE copy = ractor_copy(obj);
    return rb_ractor_make_shareable(copy);
}
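
// Raises Ractor::IsolationError when obj is not shareable; name identifies
// the assignment target in the error message.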
VALUE
rb_ractor_ensure_shareable(VALUE obj, VALUE name)
{
    if (!rb_ractor_shareable_p(obj)) {
        VALUE message = rb_sprintf("cannot assign unshareable object to %"PRIsVALUE,
                                   name);
        rb_exc_raise(rb_exc_new_str(rb_eRactorIsolationError, message));
    }
    return obj;
}

void
rb_ractor_ensure_main_ractor(const char *msg)
{
    if (!rb_ractor_main_p()) {
        rb_raise(rb_eRactorIsolationError, "%s", msg);
    }
}
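
// enter_func for the shareable_p traversal: a frozen object graph counts as
// shareable, and on success the final pass caches the answer by setting
// FL_SHAREABLE on every visited object (see rb_ractor_shareable_p_continue
// below).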
static enum obj_traverse_iterator_result
shareable_p_enter(VALUE obj)
{
    if (RB_OBJ_SHAREABLE_P(obj)) {
        return traverse_skip;
    }
    else if (RB_TYPE_P(obj, T_CLASS) ||
             RB_TYPE_P(obj, T_MODULE) ||
             RB_TYPE_P(obj, T_ICLASS)) {
        // TODO: remove it
        mark_shareable(obj);
        return traverse_skip;
    }
    else if (RB_OBJ_FROZEN_RAW(obj) &&
             allow_frozen_shareable_p(obj)) {
        return traverse_cont;
    }

    return traverse_stop; // fail
}

bool
rb_ractor_shareable_p_continue(VALUE obj)
{
    if (rb_obj_traverse(obj,
                        shareable_p_enter, null_leave,
                        mark_shareable)) {
        return false;
    }
    else {
        return true;
    }
}

#if RACTOR_CHECK_MODE > 0
void
rb_ractor_setup_belonging(VALUE obj)
{
    rb_ractor_setup_belonging_to(obj, rb_ractor_current_id());
}

static enum obj_traverse_iterator_result
reset_belonging_enter(VALUE obj)
{
    if (rb_ractor_shareable_p(obj)) {
        return traverse_skip;
    }
    else {
        rb_ractor_setup_belonging(obj);
        return traverse_cont;
    }
}
#endif

static enum obj_traverse_iterator_result
null_leave(VALUE obj)
{
    return traverse_cont;
}

static VALUE
ractor_reset_belonging(VALUE obj)
{
#if RACTOR_CHECK_MODE > 0
    rb_obj_traverse(obj, reset_belonging_enter, null_leave, NULL);
#endif
    return obj;
}

/// traverse and replace function
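// Used by ractor_copy() and ractor_move() below: enter_func supplies a
// replacement object for each visited node in data->replacement, and the
// traversal rewrites container references so they point at those
// replacements.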

// 2: stop search
// 1: skip child
// 0: continue

struct obj_traverse_replace_data;
static int obj_traverse_replace_i(VALUE obj, struct obj_traverse_replace_data *data);
typedef enum obj_traverse_iterator_result (*rb_obj_traverse_replace_enter_func)(VALUE obj, struct obj_traverse_replace_data *data);
typedef enum obj_traverse_iterator_result (*rb_obj_traverse_replace_leave_func)(VALUE obj, struct obj_traverse_replace_data *data);

struct obj_traverse_replace_data {
    rb_obj_traverse_replace_enter_func enter_func;
    rb_obj_traverse_replace_leave_func leave_func;

    st_table *rec;
    VALUE rec_hash;

    VALUE replacement;
    bool move;
};

struct obj_traverse_replace_callback_data {
    bool stop;
    VALUE src;
    struct obj_traverse_replace_data *data;
};

static int
obj_hash_traverse_replace_foreach_i(st_data_t key, st_data_t value, st_data_t argp, int error)
{
    return ST_REPLACE;
}

static int
obj_hash_traverse_replace_i(st_data_t *key, st_data_t *val, st_data_t ptr, int exists)
{
    struct obj_traverse_replace_callback_data *d = (struct obj_traverse_replace_callback_data *)ptr;
    struct obj_traverse_replace_data *data = d->data;

    if (obj_traverse_replace_i(*key, data)) {
        d->stop = true;
        return ST_STOP;
    }
    else if (*key != data->replacement) {
        VALUE v = *key = data->replacement;
        RB_OBJ_WRITTEN(d->src, Qundef, v);
    }

    if (obj_traverse_replace_i(*val, data)) {
        d->stop = true;
        return ST_STOP;
    }
    else if (*val != data->replacement) {
        VALUE v = *val = data->replacement;
        RB_OBJ_WRITTEN(d->src, Qundef, v);
    }

    return ST_CONTINUE;
}

static int
obj_iv_hash_traverse_replace_foreach_i(st_data_t _key, st_data_t _val, st_data_t _data, int _x)
{
    return ST_REPLACE;
}

static int
obj_iv_hash_traverse_replace_i(st_data_t * _key, st_data_t * val, st_data_t ptr, int exists)
{
    struct obj_traverse_replace_callback_data *d = (struct obj_traverse_replace_callback_data *)ptr;
    struct obj_traverse_replace_data *data = d->data;

    if (obj_traverse_replace_i(*(VALUE *)val, data)) {
        d->stop = true;
        return ST_STOP;
    }
    else if (*(VALUE *)val != data->replacement) {
        VALUE v = *(VALUE *)val = data->replacement;
        RB_OBJ_WRITTEN(d->src, Qundef, v);
    }

    return ST_CONTINUE;
}
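
// Lazily created identity table mapping each visited object to its
// replacement; it prevents infinite recursion on cyclic data and keeps shared
// references pointing at a single replacement.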
static struct st_table *
obj_traverse_replace_rec(struct obj_traverse_replace_data *data)
{
    if (UNLIKELY(!data->rec)) {
        data->rec_hash = rb_ident_hash_new();
        data->rec = RHASH_ST_TABLE(data->rec_hash);
    }
    return data->rec;
}
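
// Returns nonzero when every object directly reachable from obj is shareable.
// Used in the T_DATA case of obj_traverse_replace_i() below to decide whether
// such an object can be copied as-is.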
static void
obj_refer_only_shareables_p_i(VALUE obj, void *ptr)
{
    int *pcnt = (int *)ptr;

    if (!rb_ractor_shareable_p(obj)) {
        ++*pcnt;
    }
}

static int
obj_refer_only_shareables_p(VALUE obj)
{
    int cnt = 0;
    RB_VM_LOCK_ENTER_NO_BARRIER();
    {
        rb_objspace_reachable_objects_from(obj, obj_refer_only_shareables_p_i, &cnt);
    }
    RB_VM_LOCK_LEAVE_NO_BARRIER();
    return cnt == 0;
}
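
// Core of ractor copy/move: visit obj, record obj -> replacement in the
// identity table, then walk instance variables and type-specific children,
// rewriting each reference via CHECK_AND_REPLACE (defined below inside the
// function).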
static int
obj_traverse_replace_i(VALUE obj, struct obj_traverse_replace_data *data)
{
    st_data_t replacement;

    if (RB_SPECIAL_CONST_P(obj)) {
        data->replacement = obj;
        return 0;
    }

    switch (data->enter_func(obj, data)) {
      case traverse_cont: break;
      case traverse_skip: return 0; // skip children
      case traverse_stop: return 1; // stop search
    }

    replacement = (st_data_t)data->replacement;

    if (UNLIKELY(st_lookup(obj_traverse_replace_rec(data), (st_data_t)obj, &replacement))) {
        data->replacement = (VALUE)replacement;
        return 0;
    }
    else {
        st_insert(obj_traverse_replace_rec(data), (st_data_t)obj, replacement);
    }

    if (!data->move) {
        obj = replacement;
    }

#define CHECK_AND_REPLACE(v) do { \
    VALUE _val = (v); \
    if (obj_traverse_replace_i(_val, data)) { return 1; } \
    else if (data->replacement != _val) { RB_OBJ_WRITE(obj, &v, data->replacement); } \
} while (0)

    if (UNLIKELY(FL_TEST_RAW(obj, FL_EXIVAR))) {
        struct gen_fields_tbl *fields_tbl;
        rb_ivar_generic_fields_tbl_lookup(obj, &fields_tbl);

        if (UNLIKELY(rb_shape_obj_too_complex_p(obj))) {
            struct obj_traverse_replace_callback_data d = {
                .stop = false,
                .data = data,
                .src = obj,
            };
            rb_st_foreach_with_replace(
                fields_tbl->as.complex.table,
                obj_iv_hash_traverse_replace_foreach_i,
                obj_iv_hash_traverse_replace_i,
                (st_data_t)&d
            );
            if (d.stop) return 1;
        }
        else {
            for (uint32_t i = 0; i < fields_tbl->as.shape.fields_count; i++) {
                if (!UNDEF_P(fields_tbl->as.shape.fields[i])) {
                    CHECK_AND_REPLACE(fields_tbl->as.shape.fields[i]);
                }
            }
        }
    }

    switch (BUILTIN_TYPE(obj)) {
      // no child node
      case T_FLOAT:
      case T_BIGNUM:
      case T_REGEXP:
      case T_FILE:
      case T_SYMBOL:
      case T_MATCH:
        break;
      case T_STRING:
        rb_str_make_independent(obj);
        break;

      case T_OBJECT:
        {
            if (rb_shape_obj_too_complex_p(obj)) {
                struct obj_traverse_replace_callback_data d = {
                    .stop = false,
                    .data = data,
                    .src = obj,
                };
                rb_st_foreach_with_replace(
                    ROBJECT_FIELDS_HASH(obj),
                    obj_iv_hash_traverse_replace_foreach_i,
                    obj_iv_hash_traverse_replace_i,
                    (st_data_t)&d
                );
                if (d.stop) return 1;
            }
            else {
                uint32_t len = ROBJECT_FIELDS_COUNT(obj);
                VALUE *ptr = ROBJECT_FIELDS(obj);

                for (uint32_t i = 0; i < len; i++) {
                    CHECK_AND_REPLACE(ptr[i]);
                }
            }
        }
        break;

      case T_ARRAY:
        {
            rb_ary_cancel_sharing(obj);

            for (int i = 0; i < RARRAY_LENINT(obj); i++) {
                VALUE e = rb_ary_entry(obj, i);

                if (obj_traverse_replace_i(e, data)) {
                    return 1;
                }
                else if (e != data->replacement) {
                    RARRAY_ASET(obj, i, data->replacement);
                }
            }
            RB_GC_GUARD(obj);
        }
        break;
      case T_HASH:
        {
            struct obj_traverse_replace_callback_data d = {
                .stop = false,
                .data = data,
                .src = obj,
            };
            rb_hash_stlike_foreach_with_replace(obj,
                                                obj_hash_traverse_replace_foreach_i,
                                                obj_hash_traverse_replace_i,
                                                (VALUE)&d);
            if (d.stop) return 1;
            // TODO: rehash here?

            VALUE ifnone = RHASH_IFNONE(obj);
            if (obj_traverse_replace_i(ifnone, data)) {
                return 1;
            }
            else if (ifnone != data->replacement) {
                RHASH_SET_IFNONE(obj, data->replacement);
            }
        }
        break;

      case T_STRUCT:
        {
            long len = RSTRUCT_LEN(obj);
            const VALUE *ptr = RSTRUCT_CONST_PTR(obj);

            for (long i=0; i<len; i++) {
                CHECK_AND_REPLACE(ptr[i]);
            }
        }
        break;

      case T_RATIONAL:
        CHECK_AND_REPLACE(RRATIONAL(obj)->num);
        CHECK_AND_REPLACE(RRATIONAL(obj)->den);
        break;
      case T_COMPLEX:
        CHECK_AND_REPLACE(RCOMPLEX(obj)->real);
        CHECK_AND_REPLACE(RCOMPLEX(obj)->imag);
        break;

      case T_DATA:
        if (!data->move && obj_refer_only_shareables_p(obj)) {
            break;
        }
        else {
            rb_raise(rb_eRactorError, "can not %s %"PRIsVALUE" object.",
                     data->move ? "move" : "copy", rb_class_of(obj));
        }

      case T_IMEMO:
        // not supported yet
        return 1;

      // unreachable
      case T_CLASS:
      case T_MODULE:
      case T_ICLASS:
      default:
        rp(obj);
        rb_bug("unreachable");
    }

    data->replacement = (VALUE)replacement;

    if (data->leave_func(obj, data) == traverse_stop) {
        return 1;
    }
    else {
        return 0;
    }
}

// 0: traverse all
// 1: stopped
static VALUE
rb_obj_traverse_replace(VALUE obj,
                        rb_obj_traverse_replace_enter_func enter_func,
                        rb_obj_traverse_replace_leave_func leave_func,
                        bool move)
{
    struct obj_traverse_replace_data data = {
        .enter_func = enter_func,
        .leave_func = leave_func,
        .rec = NULL,
        .replacement = Qundef,
        .move = move,
    };

    if (obj_traverse_replace_i(obj, &data)) {
        return Qundef;
    }
    else {
        return data.replacement;
    }
}
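
// Which builtin types are write-barrier protected; move_enter() consults this
// so the freshly allocated destination slot gets FL_WB_PROTECTED exactly when
// a new object of that type would.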
static const bool wb_protected_types[RUBY_T_MASK] = {
    [T_OBJECT] = RGENGC_WB_PROTECTED_OBJECT,
    [T_HASH] = RGENGC_WB_PROTECTED_HASH,
    [T_ARRAY] = RGENGC_WB_PROTECTED_ARRAY,
    [T_STRING] = RGENGC_WB_PROTECTED_STRING,
    [T_STRUCT] = RGENGC_WB_PROTECTED_STRUCT,
    [T_COMPLEX] = RGENGC_WB_PROTECTED_COMPLEX,
    [T_REGEXP] = RGENGC_WB_PROTECTED_REGEXP,
    [T_MATCH] = RGENGC_WB_PROTECTED_MATCH,
    [T_FLOAT] = RGENGC_WB_PROTECTED_FLOAT,
    [T_RATIONAL] = RGENGC_WB_PROTECTED_RATIONAL,
};
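
// Move path: move_enter() allocates an empty slot of the same size for each
// unshareable object, and move_leave() transplants the contents into it, then
// zeroes the source and turns it into a frozen Ractor::MovedObject placeholder
// so stale references can no longer be used as the original object.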
|
|
|
|
|

static enum obj_traverse_iterator_result
move_enter(VALUE obj, struct obj_traverse_replace_data *data)
{
    if (rb_ractor_shareable_p(obj)) {
        data->replacement = obj;
        return traverse_skip;
    }
    else {
        VALUE type = RB_BUILTIN_TYPE(obj);
        type |= wb_protected_types[type] ? FL_WB_PROTECTED : 0;
        NEWOBJ_OF(moved, struct RBasic, 0, type, rb_gc_obj_slot_size(obj), 0);
        data->replacement = (VALUE)moved;
        return traverse_cont;
    }
}

static enum obj_traverse_iterator_result
move_leave(VALUE obj, struct obj_traverse_replace_data *data)
{
    size_t size = rb_gc_obj_slot_size(obj);
    memcpy((void *)data->replacement, (void *)obj, size);

    void rb_replace_generic_ivar(VALUE clone, VALUE obj); // variable.c

    rb_gc_obj_id_moved(data->replacement);

    if (UNLIKELY(FL_TEST_RAW(obj, FL_EXIVAR))) {
        rb_replace_generic_ivar(data->replacement, obj);
    }

    VALUE flags = T_OBJECT | FL_FREEZE | (RBASIC(obj)->flags & FL_PROMOTED);

    // Avoid mutations using bind_call, etc.
    MEMZERO((char *)obj + sizeof(struct RBasic), char, size - sizeof(struct RBasic));
    RBASIC(obj)->flags = flags;
    RBASIC_SET_CLASS_RAW(obj, rb_cRactorMovedObject);
    return traverse_cont;
}

static VALUE
ractor_move(VALUE obj)
{
    VALUE val = rb_obj_traverse_replace(obj, move_enter, move_leave, true);
    if (!UNDEF_P(val)) {
        return val;
    }
    else {
        rb_raise(rb_eRactorError, "can not move the object");
    }
}
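
// Note (Ruby-level view): the move path above is what backs
// `Ractor#send(obj, move: true)`. After the send, the sender's `obj` slot has
// been overwritten by move_leave(): its contents now live in the replacement
// slot, and the original slot is left as a frozen placeholder of class
// Ractor::MovedObject, so any further method call on it raises.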

static enum obj_traverse_iterator_result
copy_enter(VALUE obj, struct obj_traverse_replace_data *data)
{
    if (rb_ractor_shareable_p(obj)) {
        data->replacement = obj;
        return traverse_skip;
    }
    else {
        data->replacement = rb_obj_clone(obj);
        return traverse_cont;
    }
}

static enum obj_traverse_iterator_result
copy_leave(VALUE obj, struct obj_traverse_replace_data *data)
{
    return traverse_cont;
}

static VALUE
ractor_copy(VALUE obj)
{
    VALUE val = rb_obj_traverse_replace(obj, copy_enter, copy_leave, false);
    if (!UNDEF_P(val)) {
        return val;
    }
    else {
        rb_raise(rb_eRactorError, "can not copy the object");
    }
}
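
// Note: ractor_copy() is the deep-copy counterpart used when an unshareable
// object is communicated without `move: true`; copy_enter() passes shareable
// objects through untouched and clones everything else, and the clone is what
// the receiving ractor gets.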

// Ractor local storage

struct rb_ractor_local_key_struct {
    const struct rb_ractor_local_storage_type *type;
    void *main_cache;
};

static struct freed_ractor_local_keys_struct {
    int cnt;
    int capa;
    rb_ractor_local_key_t *keys;
} freed_ractor_local_keys;

static int
ractor_local_storage_mark_i(st_data_t key, st_data_t val, st_data_t dmy)
{
    struct rb_ractor_local_key_struct *k = (struct rb_ractor_local_key_struct *)key;
    if (k->type->mark) (*k->type->mark)((void *)val);
    return ST_CONTINUE;
}

static enum rb_id_table_iterator_result
idkey_local_storage_mark_i(VALUE val, void *dmy)
{
    rb_gc_mark(val);
    return ID_TABLE_CONTINUE;
}

static void
ractor_local_storage_mark(rb_ractor_t *r)
{
    if (r->local_storage) {
        st_foreach(r->local_storage, ractor_local_storage_mark_i, 0);

        for (int i=0; i<freed_ractor_local_keys.cnt; i++) {
            rb_ractor_local_key_t key = freed_ractor_local_keys.keys[i];
            st_data_t val, k = (st_data_t)key;
            if (st_delete(r->local_storage, &k, &val) &&
                (key = (rb_ractor_local_key_t)k)->type->free) {
                (*key->type->free)((void *)val);
            }
        }
    }

    if (r->idkey_local_storage) {
        rb_id_table_foreach_values(r->idkey_local_storage, idkey_local_storage_mark_i, NULL);
    }

    rb_gc_mark(r->local_storage_store_lock);
}

static int
ractor_local_storage_free_i(st_data_t key, st_data_t val, st_data_t dmy)
{
    struct rb_ractor_local_key_struct *k = (struct rb_ractor_local_key_struct *)key;
    if (k->type->free) (*k->type->free)((void *)val);
    return ST_CONTINUE;
}

static void
ractor_local_storage_free(rb_ractor_t *r)
{
    if (r->local_storage) {
        st_foreach(r->local_storage, ractor_local_storage_free_i, 0);
        st_free_table(r->local_storage);
    }

    if (r->idkey_local_storage) {
        rb_id_table_free(r->idkey_local_storage);
    }
}

static void
rb_ractor_local_storage_value_mark(void *ptr)
{
    rb_gc_mark((VALUE)ptr);
}

static const struct rb_ractor_local_storage_type ractor_local_storage_type_null = {
    NULL,
    NULL,
};

const struct rb_ractor_local_storage_type rb_ractor_local_storage_type_free = {
    NULL,
    ruby_xfree,
};

static const struct rb_ractor_local_storage_type ractor_local_storage_type_value = {
    rb_ractor_local_storage_value_mark,
    NULL,
};

rb_ractor_local_key_t
rb_ractor_local_storage_ptr_newkey(const struct rb_ractor_local_storage_type *type)
{
    rb_ractor_local_key_t key = ALLOC(struct rb_ractor_local_key_struct);
    key->type = type ? type : &ractor_local_storage_type_null;
    key->main_cache = (void *)Qundef;
    return key;
}

rb_ractor_local_key_t
rb_ractor_local_storage_value_newkey(void)
{
    return rb_ractor_local_storage_ptr_newkey(&ractor_local_storage_type_value);
}

void
rb_ractor_local_storage_delkey(rb_ractor_local_key_t key)
{
    RB_VM_LOCK_ENTER();
    {
        if (freed_ractor_local_keys.cnt == freed_ractor_local_keys.capa) {
            freed_ractor_local_keys.capa = freed_ractor_local_keys.capa ? freed_ractor_local_keys.capa * 2 : 4;
            REALLOC_N(freed_ractor_local_keys.keys, rb_ractor_local_key_t, freed_ractor_local_keys.capa);
        }
        freed_ractor_local_keys.keys[freed_ractor_local_keys.cnt++] = key;
    }
    RB_VM_LOCK_LEAVE();
}
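
// Note: deleted keys are only queued here. Each ractor drops its value for a
// queued key (calling the type's free callback) in ractor_local_storage_mark(),
// and the key structs themselves are released afterwards in
// rb_ractor_finish_marking(), so a key struct stays valid while ractors clean
// up their entries during marking.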

static bool
ractor_local_ref(rb_ractor_local_key_t key, void **pret)
{
    if (rb_ractor_main_p()) {
        if (!UNDEF_P((VALUE)key->main_cache)) {
            *pret = key->main_cache;
            return true;
        }
        else {
            return false;
        }
    }
    else {
        rb_ractor_t *cr = GET_RACTOR();

        if (cr->local_storage && st_lookup(cr->local_storage, (st_data_t)key, (st_data_t *)pret)) {
            return true;
        }
        else {
            return false;
        }
    }
}
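
// Note: values set on the main ractor are additionally cached in
// key->main_cache (see ractor_local_set()), so lookups on the main ractor can
// skip the st_table entirely; UNDEF_P(main_cache) means "not set yet".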

static void
ractor_local_set(rb_ractor_local_key_t key, void *ptr)
{
    rb_ractor_t *cr = GET_RACTOR();

    if (cr->local_storage == NULL) {
        cr->local_storage = st_init_numtable();
    }

    st_insert(cr->local_storage, (st_data_t)key, (st_data_t)ptr);

    if (rb_ractor_main_p()) {
        key->main_cache = ptr;
    }
}

VALUE
rb_ractor_local_storage_value(rb_ractor_local_key_t key)
{
    void *val;
    if (ractor_local_ref(key, &val)) {
        return (VALUE)val;
    }
    else {
        return Qnil;
    }
}

bool
rb_ractor_local_storage_value_lookup(rb_ractor_local_key_t key, VALUE *val)
{
    if (ractor_local_ref(key, (void **)val)) {
        return true;
    }
    else {
        return false;
    }
}

void
rb_ractor_local_storage_value_set(rb_ractor_local_key_t key, VALUE val)
{
    ractor_local_set(key, (void *)val);
}

void *
rb_ractor_local_storage_ptr(rb_ractor_local_key_t key)
{
    void *ret;
    if (ractor_local_ref(key, &ret)) {
        return ret;
    }
    else {
        return NULL;
    }
}

void
rb_ractor_local_storage_ptr_set(rb_ractor_local_key_t key, void *ptr)
{
    ractor_local_set(key, ptr);
}
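
// Illustrative sketch (commented out; `example_key` and `example_cache` are
// made-up names, not part of Ruby): how C code might use the key-based API
// above to keep one VALUE per ractor.
//
//   static rb_ractor_local_key_t example_key;
//
//   static VALUE
//   example_cache(void)
//   {
//       VALUE cache = rb_ractor_local_storage_value(example_key); // Qnil if unset
//       if (NIL_P(cache)) {
//           cache = rb_hash_new();
//           rb_ractor_local_storage_value_set(example_key, cache);
//       }
//       return cache;
//   }
//
//   // at initialization: example_key = rb_ractor_local_storage_value_newkey();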

#define DEFAULT_KEYS_CAPA 0x10

void
rb_ractor_finish_marking(void)
{
    for (int i=0; i<freed_ractor_local_keys.cnt; i++) {
        ruby_xfree(freed_ractor_local_keys.keys[i]);
    }
    freed_ractor_local_keys.cnt = 0;
    if (freed_ractor_local_keys.capa > DEFAULT_KEYS_CAPA) {
        freed_ractor_local_keys.capa = DEFAULT_KEYS_CAPA;
        REALLOC_N(freed_ractor_local_keys.keys, rb_ractor_local_key_t, DEFAULT_KEYS_CAPA);
    }
}

static VALUE
ractor_local_value(rb_execution_context_t *ec, VALUE self, VALUE sym)
{
    rb_ractor_t *cr = rb_ec_ractor_ptr(ec);
    ID id = rb_check_id(&sym);
    struct rb_id_table *tbl = cr->idkey_local_storage;
    VALUE val;

    if (id && tbl && rb_id_table_lookup(tbl, id, &val)) {
        return val;
    }
    else {
        return Qnil;
    }
}

static VALUE
ractor_local_value_set(rb_execution_context_t *ec, VALUE self, VALUE sym, VALUE val)
{
    rb_ractor_t *cr = rb_ec_ractor_ptr(ec);
    ID id = SYM2ID(rb_to_symbol(sym));
    struct rb_id_table *tbl = cr->idkey_local_storage;

    if (tbl == NULL) {
        tbl = cr->idkey_local_storage = rb_id_table_create(2);
    }
    rb_id_table_insert(tbl, id, val);
    return val;
}

struct ractor_local_storage_store_data {
    rb_execution_context_t *ec;
    struct rb_id_table *tbl;
    ID id;
    VALUE sym;
};

static VALUE
ractor_local_value_store_i(VALUE ptr)
{
    VALUE val;
    struct ractor_local_storage_store_data *data = (struct ractor_local_storage_store_data *)ptr;

    if (rb_id_table_lookup(data->tbl, data->id, &val)) {
        // after synchronization, an already registered entry was found; return it as-is
    }
    else {
        val = rb_yield(Qnil);
        ractor_local_value_set(data->ec, Qnil, data->sym, val);
    }
    return val;
}

static VALUE
ractor_local_value_store_if_absent(rb_execution_context_t *ec, VALUE self, VALUE sym)
{
    rb_ractor_t *cr = rb_ec_ractor_ptr(ec);
    struct ractor_local_storage_store_data data = {
        .ec = ec,
        .sym = sym,
        .id = SYM2ID(rb_to_symbol(sym)),
        .tbl = cr->idkey_local_storage,
    };
    VALUE val;

    if (data.tbl == NULL) {
        data.tbl = cr->idkey_local_storage = rb_id_table_create(2);
    }
    else if (rb_id_table_lookup(data.tbl, data.id, &val)) {
        // already set
        return val;
    }

    if (!cr->local_storage_store_lock) {
        cr->local_storage_store_lock = rb_mutex_new();
    }

    return rb_mutex_synchronize(cr->local_storage_store_lock, ractor_local_value_store_i, (VALUE)&data);
}
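
// Note: ractor_local_value(), ractor_local_value_set() and
// ractor_local_value_store_if_absent() are the builtin primitives behind the
// symbol-keyed, per-ractor storage exposed to Ruby (see ractor.rb), i.e.
// `Ractor.current[:key]`, `Ractor.current[:key] = val`, and the
// store-if-absent form, which runs the initialization block at most once
// under local_storage_store_lock.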

// Ractor::Channel (emulate with Ractor)

typedef rb_ractor_t rb_ractor_channel_t;

static VALUE
ractor_channel_func(RB_BLOCK_CALL_FUNC_ARGLIST(y, c))
{
    rb_execution_context_t *ec = GET_EC();
    rb_ractor_t *cr = rb_ec_ractor_ptr(ec);

    while (1) {
        int state;

        EC_PUSH_TAG(ec);
        if ((state = EC_EXEC_TAG()) == TAG_NONE) {
            VALUE obj = ractor_receive(ec, cr);
            ractor_yield(ec, cr, obj, Qfalse);
        }
        EC_POP_TAG();

        if (state) {
            // ignore the error
            break;
        }
    }

    return Qnil;
}

static VALUE
rb_ractor_channel_new(void)
{
#if 0
    return rb_funcall(rb_const_get(rb_cRactor, rb_intern("Channel")), rb_intern("new"), 0);
#else
    // class Channel
    //   def self.new
    //     Ractor.new do # func body
    //       while true
    //         obj = Ractor.receive
    //         Ractor.yield obj
    //       end
    //     rescue Ractor::ClosedError
    //       nil
    //     end
    //   end
    // end

    return ractor_create_func(rb_cRactor, Qnil, rb_str_new2("Ractor/channel"), rb_ary_new(), ractor_channel_func);
#endif
}

static VALUE
rb_ractor_channel_yield(rb_execution_context_t *ec, VALUE vch, VALUE obj)
{
    VM_ASSERT(ec == rb_current_ec_noinline());
    rb_ractor_channel_t *ch = RACTOR_PTR(vch);

    ractor_send(ec, (rb_ractor_t *)ch, obj, Qfalse);
    return Qnil;
}

static VALUE
rb_ractor_channel_take(rb_execution_context_t *ec, VALUE vch)
{
    VM_ASSERT(ec == rb_current_ec_noinline());
    rb_ractor_channel_t *ch = RACTOR_PTR(vch);

    return ractor_take(ec, (rb_ractor_t *)ch);
}

static VALUE
rb_ractor_channel_close(rb_execution_context_t *ec, VALUE vch)
{
    VM_ASSERT(ec == rb_current_ec_noinline());
    rb_ractor_channel_t *ch = RACTOR_PTR(vch);

    ractor_close_incoming(ec, (rb_ractor_t *)ch);
    return ractor_close_outgoing(ec, (rb_ractor_t *)ch);
}
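
// Note: the channel trio above is used below as a one-shot completion signal:
// the interrupted main ractor calls rb_ractor_channel_yield(ch, Qtrue) once the
// require has finished, the requesting ractor blocks in
// rb_ractor_channel_take(ch), and rb_ractor_channel_close(ch) shuts the helper
// ractor down afterwards.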

// Ractor#require

struct cross_ractor_require {
    VALUE ch;
    VALUE result;
    VALUE exception;

    // require
    VALUE feature;

    // autoload
    VALUE module;
    ID name;
};

static VALUE
require_body(VALUE data)
{
    struct cross_ractor_require *crr = (struct cross_ractor_require *)data;

    ID require;
    CONST_ID(require, "require");
    crr->result = rb_funcallv(Qnil, require, 1, &crr->feature);

    return Qnil;
}

static VALUE
require_rescue(VALUE data, VALUE errinfo)
{
    struct cross_ractor_require *crr = (struct cross_ractor_require *)data;
    crr->exception = errinfo;
    return Qundef;
}

static VALUE
require_result_copy_body(VALUE data)
{
    struct cross_ractor_require *crr = (struct cross_ractor_require *)data;

    if (crr->exception != Qundef) {
        VM_ASSERT(crr->result == Qundef);
        crr->exception = ractor_copy(crr->exception);
    }
    else {
        VM_ASSERT(crr->result != Qundef);
        crr->result = ractor_copy(crr->result);
    }

    return Qnil;
}

static VALUE
require_result_copy_rescue(VALUE data, VALUE errinfo)
{
    struct cross_ractor_require *crr = (struct cross_ractor_require *)data;
    crr->exception = errinfo; // ractor_move(crr->exception);
    return Qnil;
}

static VALUE
ractor_require_protect(struct cross_ractor_require *crr, VALUE (*func)(VALUE))
{
    // catch any error
    rb_rescue2(func, (VALUE)crr,
               require_rescue, (VALUE)crr, rb_eException, 0);

    rb_rescue2(require_result_copy_body, (VALUE)crr,
               require_result_copy_rescue, (VALUE)crr, rb_eException, 0);

    rb_ractor_channel_yield(GET_EC(), crr->ch, Qtrue);
    return Qnil;
}

static VALUE
ractor_require_func(void *data)
{
    struct cross_ractor_require *crr = (struct cross_ractor_require *)data;
    return ractor_require_protect(crr, require_body);
}

VALUE
rb_ractor_require(VALUE feature)
{
    // TODO: make feature shareable
    struct cross_ractor_require crr = {
        .feature = feature, // TODO: ractor
        .ch = rb_ractor_channel_new(),
        .result = Qundef,
        .exception = Qundef,
    };

    rb_execution_context_t *ec = GET_EC();
    rb_ractor_t *main_r = GET_VM()->ractor.main_ractor;
    rb_ractor_interrupt_exec(main_r, ractor_require_func, &crr, 0);

    // wait for the require to finish
    rb_ractor_channel_take(ec, crr.ch);
    rb_ractor_channel_close(ec, crr.ch);

    if (crr.exception != Qundef) {
        ractor_reset_belonging(crr.exception);
        rb_exc_raise(crr.exception);
    }
    else {
        RUBY_ASSERT(crr.result != Qundef);
        ractor_reset_belonging(crr.result);
        return crr.result;
    }
}
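
// Note: rb_ractor_require() is the cross-ractor require path, taken when code
// running outside the main ractor requires a file (e.g. `Ractor.new { require
// "json" }`). The actual `require` runs on the main ractor via
// rb_ractor_interrupt_exec(); its result or exception is deep-copied with
// ractor_copy() into the shared crr struct, completion is signalled over the
// channel, and the value is then returned or re-raised here in the requesting
// ractor.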

static VALUE
ractor_require(rb_execution_context_t *ec, VALUE self, VALUE feature)
{
    return rb_ractor_require(feature);
}

static VALUE
autoload_load_body(VALUE data)
{
    struct cross_ractor_require *crr = (struct cross_ractor_require *)data;
    crr->result = rb_autoload_load(crr->module, crr->name);
    return Qnil;
}

static VALUE
ractor_autoload_load_func(void *data)
{
    struct cross_ractor_require *crr = (struct cross_ractor_require *)data;
    return ractor_require_protect(crr, autoload_load_body);
}

VALUE
rb_ractor_autoload_load(VALUE module, ID name)
{
    struct cross_ractor_require crr = {
        .module = module,
        .name = name,
        .ch = rb_ractor_channel_new(),
        .result = Qundef,
        .exception = Qundef,
    };

    rb_execution_context_t *ec = GET_EC();
    rb_ractor_t *main_r = GET_VM()->ractor.main_ractor;
    rb_ractor_interrupt_exec(main_r, ractor_autoload_load_func, &crr, 0);

    // wait for the autoload to finish
    rb_ractor_channel_take(ec, crr.ch);
    rb_ractor_channel_close(ec, crr.ch);

    if (crr.exception != Qundef) {
        rb_exc_raise(crr.exception);
    }
    else {
        return crr.result;
    }
}

#include "ractor.rbinc"