// Copyright Joyent, Inc. and other Node contributors.
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the
// "Software"), to deal in the Software without restriction, including
// without limitation the rights to use, copy, modify, merge, publish,
// distribute, sublicense, and/or sell copies of the Software, and to permit
// persons to whom the Software is furnished to do so, subject to the
// following conditions:
//
// The above copyright notice and this permission notice shall be included
// in all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN
// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
// USE OR OTHER DEALINGS IN THE SOFTWARE.

'use strict';

const {
  ArrayPrototypeJoin,
  ArrayPrototypePop,
  ArrayPrototypePush,
  ArrayPrototypeSlice,
  ArrayPrototypeSplice,
  ArrayPrototypeUnshift,
  Boolean,
  Error,
  ErrorCaptureStackTrace,
  FunctionPrototypeBind,
  FunctionPrototypeCall,
  NumberMAX_SAFE_INTEGER,
  ObjectDefineProperties,
  ObjectDefineProperty,
  ObjectGetPrototypeOf,
  ObjectSetPrototypeOf,
  Promise,
  PromiseReject,
  PromiseResolve,
  ReflectApply,
  ReflectOwnKeys,
  String,
  StringPrototypeSplit,
  Symbol,
  SymbolAsyncIterator,
  SymbolDispose,
  SymbolFor,
} = primordials;

const kRejection = SymbolFor('nodejs.rejection');

const { kEmptyObject, spliceOne } = require('internal/util');

const {
  inspect,
  identicalSequenceRange,
} = require('internal/util/inspect');

let FixedQueue;
let kFirstEventParam;
let kResistStopPropagation;

const {
  AbortError,
  codes: {
    ERR_INVALID_ARG_TYPE,
    ERR_UNHANDLED_ERROR,
  },
  genericNodeError,
  kEnhanceStackBeforeInspector,
} = require('internal/errors');

const {
  validateInteger,
  validateAbortSignal,
  validateBoolean,
  validateFunction,
  validateNumber,
  validateObject,
  validateString,
} = require('internal/validators');

const { addAbortListener } = require('internal/events/abort_listener');

const kCapture = Symbol('kCapture');
const kErrorMonitor = Symbol('events.errorMonitor');
const kShapeMode = Symbol('shapeMode');
const kMaxEventTargetListeners = Symbol('events.maxEventTargetListeners');
const kMaxEventTargetListenersWarned =
  Symbol('events.maxEventTargetListenersWarned');
const kWatermarkData = SymbolFor('nodejs.watermarkData');

let EventEmitterAsyncResource;
// The EventEmitterAsyncResource has to be initialized lazily because event.js
// is loaded so early in the bootstrap process, before async_hooks is available.
//
// This implementation was adapted straight from addaleax's
// eventemitter-asyncresource MIT-licensed userland module.
// https://github.com/addaleax/eventemitter-asyncresource
function lazyEventEmitterAsyncResource() {
  if (EventEmitterAsyncResource === undefined) {
    const {
      AsyncResource,
    } = require('async_hooks');

    class EventEmitterReferencingAsyncResource extends AsyncResource {
      #eventEmitter;

      /**
       * @param {EventEmitter} ee
       * @param {string} [type]
       * @param {{
       *   triggerAsyncId?: number,
       *   requireManualDestroy?: boolean,
       * }} [options]
       */
      constructor(ee, type, options) {
        super(type, options);
        this.#eventEmitter = ee;
      }

      /**
       * @type {EventEmitter}
       */
      get eventEmitter() {
        return this.#eventEmitter;
      }
    }

    EventEmitterAsyncResource =
      class EventEmitterAsyncResource extends EventEmitter {
        #asyncResource;

        /**
         * @param {{
         *   name?: string,
         *   triggerAsyncId?: number,
         *   requireManualDestroy?: boolean,
         * }} [options]
         */
        constructor(options = undefined) {
          let name;
          if (typeof options === 'string') {
            name = options;
            options = undefined;
          } else {
            if (new.target === EventEmitterAsyncResource) {
              validateString(options?.name, 'options.name');
            }
            name = options?.name || new.target.name;
          }
          super(options);

          this.#asyncResource = new EventEmitterReferencingAsyncResource(this, name, options);
        }

        /**
         * @param {string | symbol} event
         * @param {...any} args
         * @returns {boolean}
         */
        emit(event, ...args) {
          const asyncResource = this.#asyncResource;
          ArrayPrototypeUnshift(args, super.emit, this, event);
          return ReflectApply(asyncResource.runInAsyncScope, asyncResource,
                              args);
        }

        /**
         * @returns {void}
         */
        emitDestroy() {
          this.#asyncResource.emitDestroy();
        }

        /**
         * @type {number}
         */
        get asyncId() {
          return this.#asyncResource.asyncId();
        }

        /**
         * @type {number}
         */
        get triggerAsyncId() {
          return this.#asyncResource.triggerAsyncId();
        }

        /**
         * @type {EventEmitterReferencingAsyncResource}
         */
        get asyncResource() {
          return this.#asyncResource;
        }
      };
  }
  return EventEmitterAsyncResource;
}

/**
 * Creates a new `EventEmitter` instance.
 * @param {{ captureRejections?: boolean; }} [opts]
 * @constructs {EventEmitter}
 */
function EventEmitter(opts) {
  EventEmitter.init.call(this, opts);
}
module.exports = EventEmitter;
module.exports.addAbortListener = addAbortListener;
module.exports.once = once;
module.exports.on = on;
module.exports.getEventListeners = getEventListeners;
module.exports.getMaxListeners = getMaxListeners;
// Backwards-compat with node 0.10.x
EventEmitter.EventEmitter = EventEmitter;

EventEmitter.usingDomains = false;

EventEmitter.captureRejectionSymbol = kRejection;
ObjectDefineProperty(EventEmitter, 'captureRejections', {
  __proto__: null,
  get() {
    return EventEmitter.prototype[kCapture];
  },
  set(value) {
    validateBoolean(value, 'EventEmitter.captureRejections');

    EventEmitter.prototype[kCapture] = value;
  },
  enumerable: true,
});

ObjectDefineProperty(EventEmitter, 'EventEmitterAsyncResource', {
  __proto__: null,
  enumerable: true,
  get: lazyEventEmitterAsyncResource,
  set: undefined,
  configurable: true,
});

EventEmitter.errorMonitor = kErrorMonitor;

// The default for captureRejections is false
ObjectDefineProperty(EventEmitter.prototype, kCapture, {
  __proto__: null,
  value: false,
  writable: true,
  enumerable: false,
});

EventEmitter.prototype._events = undefined;
EventEmitter.prototype._eventsCount = 0;
EventEmitter.prototype._maxListeners = undefined;

// By default EventEmitters will print a warning if more than 10 listeners are
// added to it. This is a useful default which helps find memory leaks.
let defaultMaxListeners = 10;
let isEventTarget;

function checkListener(listener) {
  validateFunction(listener, 'listener');
}

ObjectDefineProperty(EventEmitter, 'defaultMaxListeners', {
  __proto__: null,
  enumerable: true,
  get: function() {
    return defaultMaxListeners;
  },
  set: function(arg) {
    validateNumber(arg, 'defaultMaxListeners', 0);
    defaultMaxListeners = arg;
  },
});

ObjectDefineProperties(EventEmitter, {
  kMaxEventTargetListeners: {
    __proto__: null,
    value: kMaxEventTargetListeners,
    enumerable: false,
    configurable: false,
    writable: false,
  },
  kMaxEventTargetListenersWarned: {
    __proto__: null,
    value: kMaxEventTargetListenersWarned,
    enumerable: false,
    configurable: false,
    writable: false,
  },
});

/**
 * Sets the max listeners.
 * @param {number} n
 * @param {EventTarget[] | EventEmitter[]} [eventTargets]
 * @returns {void}
 */
EventEmitter.setMaxListeners =
  function(n = defaultMaxListeners, ...eventTargets) {
    validateNumber(n, 'setMaxListeners', 0);
    if (eventTargets.length === 0) {
      defaultMaxListeners = n;
    } else {
      if (isEventTarget === undefined)
        isEventTarget = require('internal/event_target').isEventTarget;

      for (let i = 0; i < eventTargets.length; i++) {
        const target = eventTargets[i];
        if (isEventTarget(target)) {
          target[kMaxEventTargetListeners] = n;
          target[kMaxEventTargetListenersWarned] = false;
        } else if (typeof target.setMaxListeners === 'function') {
          target.setMaxListeners(n);
        } else {
          throw new ERR_INVALID_ARG_TYPE(
            'eventTargets',
            ['EventEmitter', 'EventTarget'],
            target);
        }
      }
    }
  };

// If you're updating this function definition, please also update any
// re-definitions, such as the one in the Domain module (lib/domain.js).
EventEmitter.init = function(opts) {

  if (this._events === undefined ||
      this._events === ObjectGetPrototypeOf(this)._events) {
    this._events = { __proto__: null };
    this._eventsCount = 0;
    this[kShapeMode] = false;
  } else {
    this[kShapeMode] = true;
  }

  this._maxListeners ||= undefined;

  if (opts?.captureRejections) {
    validateBoolean(opts.captureRejections, 'options.captureRejections');
    this[kCapture] = Boolean(opts.captureRejections);
  } else {
    // Assigning the kCapture property directly saves an expensive
    // prototype lookup in a very sensitive hot path.
    this[kCapture] = EventEmitter.prototype[kCapture];
  }
};

function addCatch(that, promise, type, args) {
  if (!that[kCapture]) {
    return;
  }

  // Handle Promises/A+ spec, `then` could be a getter
  // that throws on second use.
  try {
    const then = promise.then;

    if (typeof then === 'function') {
      then.call(promise, undefined, function(err) {
        // The callback is called with nextTick to avoid a follow-up
        // rejection from this promise.
        process.nextTick(emitUnhandledRejectionOrErr, that, err, type, args);
      });
    }
  } catch (err) {
    that.emit('error', err);
  }
}

function emitUnhandledRejectionOrErr(ee, err, type, args) {
  if (typeof ee[kRejection] === 'function') {
    ee[kRejection](err, type, ...args);
  } else {
    // We have to disable the capture rejections mechanism, otherwise
    // we might end up in an infinite loop.
    const prev = ee[kCapture];

    // If the error handler throws, it is not catchable and it
    // will end up in 'uncaughtException'. We restore the previous
    // value of kCapture in case the uncaughtException is present
    // and the exception is handled.
    try {
      ee[kCapture] = false;
      ee.emit('error', err);
    } finally {
      ee[kCapture] = prev;
    }
  }
}

/**
 * Sets the maximum number of listeners for the event emitter.
 * @param {number} n
 * @returns {EventEmitter}
 */
EventEmitter.prototype.setMaxListeners = function setMaxListeners(n) {
  validateNumber(n, 'setMaxListeners', 0);
  this._maxListeners = n;
  return this;
};

function _getMaxListeners(that) {
  if (that._maxListeners === undefined)
    return EventEmitter.defaultMaxListeners;
  return that._maxListeners;
}

/**
 * Returns the current max listener value for the event emitter.
 * @returns {number}
 */
EventEmitter.prototype.getMaxListeners = function getMaxListeners() {
  return _getMaxListeners(this);
};

function enhanceStackTrace(err, own) {
  let ctorInfo = '';
  try {
    const { name } = this.constructor;
    if (name !== 'EventEmitter')
      ctorInfo = ` on ${name} instance`;
  } catch {
    // Continue regardless of error.
  }
  const sep = `\nEmitted 'error' event${ctorInfo} at:\n`;

  const errStack = ArrayPrototypeSlice(
    StringPrototypeSplit(err.stack, '\n'), 1);
  const ownStack = ArrayPrototypeSlice(
    StringPrototypeSplit(own.stack, '\n'), 1);

  const { len, offset } = identicalSequenceRange(ownStack, errStack);
  if (len > 0) {
    ArrayPrototypeSplice(ownStack, offset + 1, len - 2,
                         ' [... lines matching original stack trace ...]');
  }

  return err.stack + sep + ArrayPrototypeJoin(ownStack, '\n');
}

/**
 * Synchronously calls each of the listeners registered
 * for the event.
 * @param {string | symbol} type
 * @param {...any} [args]
 * @returns {boolean}
 */
EventEmitter.prototype.emit = function emit(type, ...args) {
  let doError = (type === 'error');

  const events = this._events;
  if (events !== undefined) {
    if (doError && events[kErrorMonitor] !== undefined)
      this.emit(kErrorMonitor, ...args);
    doError &&= events.error === undefined;
  } else if (!doError)
    return false;

  // If there is no 'error' event listener then throw.
  if (doError) {
    let er;
    if (args.length > 0)
      er = args[0];
    if (er instanceof Error) {
      try {
        const capture = {};
        ErrorCaptureStackTrace(capture, EventEmitter.prototype.emit);
        ObjectDefineProperty(er, kEnhanceStackBeforeInspector, {
          __proto__: null,
          value: FunctionPrototypeBind(enhanceStackTrace, this, er, capture),
          configurable: true,
        });
      } catch {
        // Continue regardless of error.
      }

      // Note: The comments on the `throw` lines are intentional, they show
      // up in Node's output if this results in an unhandled exception.
      throw er; // Unhandled 'error' event
    }

    let stringifiedEr;
    try {
      stringifiedEr = inspect(er);
    } catch {
      stringifiedEr = er;
    }

    // At least give some kind of context to the user
    const err = new ERR_UNHANDLED_ERROR(stringifiedEr);
    err.context = er;
    throw err; // Unhandled 'error' event
  }

  const handler = events[type];

  if (handler === undefined)
    return false;

  if (typeof handler === 'function') {
    const result = handler.apply(this, args);

    // We check if result is undefined first because that
    // is the most common case so we do not pay any perf
    // penalty
    if (result !== undefined && result !== null) {
      addCatch(this, result, type, args);
    }
  } else {
    const len = handler.length;
    const listeners = arrayClone(handler);
    for (let i = 0; i < len; ++i) {
      const result = listeners[i].apply(this, args);

      // We check if result is undefined first because that
      // is the most common case so we do not pay any perf
      // penalty.
      // This code is duplicated because extracting it away
      // would make it non-inlineable.
      if (result !== undefined && result !== null) {
        addCatch(this, result, type, args);
      }
    }
  }

  return true;
};
|
|
|
|
|
function _addListener(target, type, listener, prepend) {
  let m;
  let events;
  let existing;

  checkListener(listener);

  events = target._events;
  if (events === undefined) {
    events = target._events = { __proto__: null };
    target._eventsCount = 0;
  } else {
    // To avoid recursion in the case that type === "newListener"! Before
    // adding it to the listeners, first emit "newListener".
    if (events.newListener !== undefined) {
      target.emit('newListener', type,
                  listener.listener ?? listener);

      // Re-assign `events` because a newListener handler could have caused the
      // this._events to be assigned to a new object
      events = target._events;
    }
    existing = events[type];
  }

  if (existing === undefined) {
    // Optimize the case of one listener. Don't need the extra array object.
    events[type] = listener;
    ++target._eventsCount;
  } else {
    if (typeof existing === 'function') {
      // Adding the second element, need to change to array.
      existing = events[type] =
        prepend ? [listener, existing] : [existing, listener];
      // If we've already got an array, just append.
    } else if (prepend) {
      existing.unshift(listener);
    } else {
      existing.push(listener);
    }

    // Check for listener leak
    m = _getMaxListeners(target);
    if (m > 0 && existing.length > m && !existing.warned) {
      existing.warned = true;
      // No error code for this since it is a Warning
      const w = genericNodeError(
        `Possible EventEmitter memory leak detected. ${existing.length} ${String(type)} listeners ` +
        `added to ${inspect(target, { depth: -1 })}. MaxListeners is ${m}. Use emitter.setMaxListeners() to increase limit`,
        { name: 'MaxListenersExceededWarning', emitter: target, type: type, count: existing.length });
      process.emitWarning(w);
    }
  }

  return target;
}

/**
 * Adds a listener to the event emitter.
 * @param {string | symbol} type
 * @param {Function} listener
 * @returns {EventEmitter}
 */
EventEmitter.prototype.addListener = function addListener(type, listener) {
  return _addListener(this, type, listener, false);
};

EventEmitter.prototype.on = EventEmitter.prototype.addListener;

/**
 * Adds the `listener` function to the beginning of
 * the listeners array.
 * @param {string | symbol} type
 * @param {Function} listener
 * @returns {EventEmitter}
 */
EventEmitter.prototype.prependListener =
    function prependListener(type, listener) {
      return _addListener(this, type, listener, true);
    };

function onceWrapper() {
  if (!this.fired) {
    this.target.removeListener(this.type, this.wrapFn);
    this.fired = true;
    if (arguments.length === 0)
      return this.listener.call(this.target);
    return this.listener.apply(this.target, arguments);
  }
}

function _onceWrap(target, type, listener) {
  const state = { fired: false, wrapFn: undefined, target, type, listener };
  const wrapped = onceWrapper.bind(state);
  wrapped.listener = listener;
  state.wrapFn = wrapped;
  return wrapped;
}

/**
 * Adds a one-time `listener` function to the event emitter.
 * @param {string | symbol} type
 * @param {Function} listener
 * @returns {EventEmitter}
 */
EventEmitter.prototype.once = function once(type, listener) {
  checkListener(listener);

  this.on(type, _onceWrap(this, type, listener));
  return this;
};

/**
 * Adds a one-time `listener` function to the beginning of
 * the listeners array.
 * @param {string | symbol} type
 * @param {Function} listener
 * @returns {EventEmitter}
 */
EventEmitter.prototype.prependOnceListener =
    function prependOnceListener(type, listener) {
      checkListener(listener);

      this.prependListener(type, _onceWrap(this, type, listener));
      return this;
    };

/**
 * Removes the specified `listener` from the listeners array.
 * @param {string | symbol} type
 * @param {Function} listener
 * @returns {EventEmitter}
 */
EventEmitter.prototype.removeListener =
    function removeListener(type, listener) {
      checkListener(listener);

      const events = this._events;
      if (events === undefined)
        return this;

      const list = events[type];
      if (list === undefined)
        return this;

      if (list === listener || list.listener === listener) {
        this._eventsCount -= 1;

        if (this[kShapeMode]) {
          events[type] = undefined;
        } else if (this._eventsCount === 0) {
          this._events = { __proto__: null };
        } else {
          delete events[type];
          if (events.removeListener)
            this.emit('removeListener', type, list.listener || listener);
        }
      } else if (typeof list !== 'function') {
        let position = -1;

        for (let i = list.length - 1; i >= 0; i--) {
          if (list[i] === listener || list[i].listener === listener) {
            position = i;
            break;
          }
        }

        if (position < 0)
          return this;

        if (position === 0)
          list.shift();
        else {
          spliceOne(list, position);
        }

        if (list.length === 1)
          events[type] = list[0];

        if (events.removeListener !== undefined)
          this.emit('removeListener', type, listener);
      }

      return this;
    };

EventEmitter.prototype.off = EventEmitter.prototype.removeListener;

/**
 * Removes all listeners from the event emitter. (Only
 * removes listeners for a specific event name if specified
 * as `type`).
 * @param {string | symbol} [type]
 * @returns {EventEmitter}
 */
EventEmitter.prototype.removeAllListeners =
    function removeAllListeners(type) {
      const events = this._events;
      if (events === undefined)
        return this;

      // Not listening for removeListener, no need to emit
      if (events.removeListener === undefined) {
        if (arguments.length === 0) {
          this._events = { __proto__: null };
          this._eventsCount = 0;
        } else if (events[type] !== undefined) {
          if (--this._eventsCount === 0)
            this._events = { __proto__: null };
          else
            delete events[type];
        }
        this[kShapeMode] = false;
        return this;
      }

      // Emit removeListener for all listeners on all events
      if (arguments.length === 0) {
        for (const key of ReflectOwnKeys(events)) {
          if (key === 'removeListener') continue;
          this.removeAllListeners(key);
        }
        this.removeAllListeners('removeListener');
        this._events = { __proto__: null };
        this._eventsCount = 0;
        this[kShapeMode] = false;
        return this;
      }

      const listeners = events[type];

      if (typeof listeners === 'function') {
        this.removeListener(type, listeners);
      } else if (listeners !== undefined) {
        // LIFO order
        for (let i = listeners.length - 1; i >= 0; i--) {
          this.removeListener(type, listeners[i]);
        }
      }

      return this;
    };

function _listeners(target, type, unwrap) {
  const events = target._events;

  if (events === undefined)
    return [];

  const evlistener = events[type];
  if (evlistener === undefined)
    return [];

  if (typeof evlistener === 'function')
    return unwrap ? [evlistener.listener || evlistener] : [evlistener];

  return unwrap ?
    unwrapListeners(evlistener) : arrayClone(evlistener);
}

/**
 * Returns a copy of the array of listeners for the event name
 * specified as `type`.
 * @param {string | symbol} type
 * @returns {Function[]}
 */
EventEmitter.prototype.listeners = function listeners(type) {
  return _listeners(this, type, true);
};

/**
 * Returns a copy of the array of listeners and wrappers for
 * the event name specified as `type`.
 * @param {string | symbol} type
 * @returns {Function[]}
 */
EventEmitter.prototype.rawListeners = function rawListeners(type) {
  return _listeners(this, type, false);
};

/**
 * Returns the number of listeners listening to the event name
 * specified as `type`.
 * @deprecated since v3.2.0
 * @param {EventEmitter} emitter
 * @param {string | symbol} type
 * @returns {number}
 */
EventEmitter.listenerCount = function(emitter, type) {
  if (typeof emitter.listenerCount === 'function') {
    return emitter.listenerCount(type);
  }
  return FunctionPrototypeCall(listenerCount, emitter, type);
};

EventEmitter.prototype.listenerCount = listenerCount;

/**
 * Returns the number of listeners listening to event name
 * specified as `type`.
 * @param {string | symbol} type
 * @param {Function} listener
 * @returns {number}
 */
function listenerCount(type, listener) {
  const events = this._events;

  if (events !== undefined) {
    const evlistener = events[type];

    if (typeof evlistener === 'function') {
      if (listener != null) {
        return listener === evlistener || listener === evlistener.listener ? 1 : 0;
      }

      return 1;
    } else if (evlistener !== undefined) {
      if (listener != null) {
        let matching = 0;

        for (let i = 0, l = evlistener.length; i < l; i++) {
          if (evlistener[i] === listener || evlistener[i].listener === listener) {
            matching++;
          }
        }

        return matching;
      }

      return evlistener.length;
    }
  }

  return 0;
}

/**
 * Returns an array listing the events for which
 * the emitter has registered listeners.
 * @returns {(string | symbol)[]}
 */
EventEmitter.prototype.eventNames = function eventNames() {
  return this._eventsCount > 0 ? ReflectOwnKeys(this._events) : [];
};

function arrayClone(arr) {
  // At least since V8 8.3, this implementation is faster than the previous
  // which always used a simple for-loop
  switch (arr.length) {
    case 2: return [arr[0], arr[1]];
    case 3: return [arr[0], arr[1], arr[2]];
    case 4: return [arr[0], arr[1], arr[2], arr[3]];
    case 5: return [arr[0], arr[1], arr[2], arr[3], arr[4]];
    case 6: return [arr[0], arr[1], arr[2], arr[3], arr[4], arr[5]];
  }
  return ArrayPrototypeSlice(arr);
}

function unwrapListeners(arr) {
  const ret = arrayClone(arr);
  for (let i = 0; i < ret.length; ++i) {
    const orig = ret[i].listener;
    if (typeof orig === 'function')
      ret[i] = orig;
  }
  return ret;
}

/**
 * Returns a copy of the array of listeners for the event name
 * specified as `type`.
 * @param {EventEmitter | EventTarget} emitterOrTarget
 * @param {string | symbol} type
 * @returns {Function[]}
 */
function getEventListeners(emitterOrTarget, type) {
  // First check if EventEmitter
  if (typeof emitterOrTarget.listeners === 'function') {
    return emitterOrTarget.listeners(type);
  }
  // Require event target lazily to avoid always loading it
  const { isEventTarget, kEvents } = require('internal/event_target');
  if (isEventTarget(emitterOrTarget)) {
    const root = emitterOrTarget[kEvents].get(type);
    const listeners = [];
    let handler = root?.next;
    while (handler?.listener !== undefined) {
      const listener = handler.listener?.deref ?
        handler.listener.deref() : handler.listener;
      listeners.push(listener);
      handler = handler.next;
    }
    return listeners;
  }
  throw new ERR_INVALID_ARG_TYPE('emitter',
                                 ['EventEmitter', 'EventTarget'],
                                 emitterOrTarget);
}

/**
 * Returns the max listeners set.
 * @param {EventEmitter | EventTarget} emitterOrTarget
 * @returns {number}
 */
function getMaxListeners(emitterOrTarget) {
  if (typeof emitterOrTarget?.getMaxListeners === 'function') {
    return _getMaxListeners(emitterOrTarget);
  } else if (typeof emitterOrTarget?.[kMaxEventTargetListeners] === 'number') {
    return emitterOrTarget[kMaxEventTargetListeners];
  }

  throw new ERR_INVALID_ARG_TYPE('emitter',
                                 ['EventEmitter', 'EventTarget'],
                                 emitterOrTarget);
}

/**
 * Creates a `Promise` that is fulfilled when the emitter
 * emits the given event.
 * @param {EventEmitter} emitter
 * @param {string | symbol} name
 * @param {{ signal: AbortSignal; }} [options]
 * @returns {Promise}
 */
async function once(emitter, name, options = kEmptyObject) {
  validateObject(options, 'options');
  const { signal } = options;
  validateAbortSignal(signal, 'options.signal');
  if (signal?.aborted)
    throw new AbortError(undefined, { cause: signal.reason });
  return new Promise((resolve, reject) => {
    const errorListener = (err) => {
      emitter.removeListener(name, resolver);
      if (signal != null) {
        eventTargetAgnosticRemoveListener(signal, 'abort', abortListener);
      }
      reject(err);
    };
    const resolver = (...args) => {
      if (typeof emitter.removeListener === 'function') {
        emitter.removeListener('error', errorListener);
      }
      if (signal != null) {
        eventTargetAgnosticRemoveListener(signal, 'abort', abortListener);
      }
      resolve(args);
    };

    kResistStopPropagation ??= require('internal/event_target').kResistStopPropagation;
    const opts = { __proto__: null, once: true, [kResistStopPropagation]: true };
    eventTargetAgnosticAddListener(emitter, name, resolver, opts);
    if (name !== 'error' && typeof emitter.once === 'function') {
      // EventTarget does not have `error` event semantics like Node
      // EventEmitters, we listen to `error` events only on EventEmitters.
      emitter.once('error', errorListener);
    }
    function abortListener() {
      eventTargetAgnosticRemoveListener(emitter, name, resolver);
      eventTargetAgnosticRemoveListener(emitter, 'error', errorListener);
      reject(new AbortError(undefined, { cause: signal?.reason }));
    }
    if (signal != null) {
      eventTargetAgnosticAddListener(
        signal, 'abort', abortListener, { __proto__: null, once: true, [kResistStopPropagation]: true });
    }
  });
}

const AsyncIteratorPrototype = ObjectGetPrototypeOf(
  ObjectGetPrototypeOf(async function* () {}).prototype);

function createIterResult(value, done) {
  return { value, done };
}

function eventTargetAgnosticRemoveListener(emitter, name, listener, flags) {
  if (typeof emitter.removeListener === 'function') {
    emitter.removeListener(name, listener);
  } else if (typeof emitter.removeEventListener === 'function') {
    emitter.removeEventListener(name, listener, flags);
  } else {
    throw new ERR_INVALID_ARG_TYPE('emitter', 'EventEmitter', emitter);
  }
}

function eventTargetAgnosticAddListener(emitter, name, listener, flags) {
  if (typeof emitter.on === 'function') {
    if (flags?.once) {
      emitter.once(name, listener);
    } else {
      emitter.on(name, listener);
    }
  } else if (typeof emitter.addEventListener === 'function') {
    emitter.addEventListener(name, listener, flags);
  } else {
    throw new ERR_INVALID_ARG_TYPE('emitter', 'EventEmitter', emitter);
  }
}

2021-05-18 03:54:16 +04:30
|
|
|
/**
|
|
|
|
* Returns an `AsyncIterator` that iterates `event` events.
|
|
|
|
* @param {EventEmitter} emitter
|
|
|
|
* @param {string | symbol} event
|
lib: performance improvement on readline async iterator
Using a direct approach to create the readline async iterator
allowed an iteration over 20 to 58% faster.
**BREAKING CHANGE**: With that change, the async iteterator
obtained from the readline interface doesn't have the
property "stream" any longer. This happened because it's no
longer created through a Readable, instead, the async
iterator is created directly from the events of the readline
interface instance, so, if anyone is using that property,
this change will break their code.
Also, the Readable added a backpressure control that is
fairly compensated by the use of FixedQueue + monitoring
its size. This control wasn't really precise with readline
before, though, because it only pauses the reading of the
original stream, but the lines generated from the last
message received from it was still emitted. For example:
if the readable was paused at 1000 messages but the last one
received generated 10k lines, but no further messages were
emitted again until the queue was lower than the readable
highWaterMark. A similar behavior still happens with the
new implementation, but the highWaterMark used is fixed: 1024,
and the original stream is resumed again only after the queue
is cleared.
Before making that change, I created a package implementing
the same concept used here to validate it. You can find it
[here](https://github.com/Farenheith/faster-readline-iterator)
if this helps anyhow.
PR-URL: https://github.com/nodejs/node/pull/41276
Reviewed-By: Matteo Collina <matteo.collina@gmail.com>
Reviewed-By: Benjamin Gruenbaum <benjamingr@gmail.com>
Reviewed-By: James M Snell <jasnell@gmail.com>
2022-10-24 09:49:16 -03:00
|
|
|
* @param {{
|
|
|
|
* signal: AbortSignal;
|
|
|
|
* close?: string[];
|
2024-03-15 00:59:35 +02:00
|
|
|
* highWaterMark?: number,
|
|
|
|
* lowWaterMark?: number
|
lib: performance improvement on readline async iterator
Using a direct approach to create the readline async iterator
allowed an iteration over 20 to 58% faster.
**BREAKING CHANGE**: With that change, the async iteterator
obtained from the readline interface doesn't have the
property "stream" any longer. This happened because it's no
longer created through a Readable, instead, the async
iterator is created directly from the events of the readline
interface instance, so, if anyone is using that property,
this change will break their code.
Also, the Readable added a backpressure control that is
fairly compensated by the use of FixedQueue + monitoring
its size. This control wasn't really precise with readline
before, though, because it only pauses the reading of the
original stream, but the lines generated from the last
message received from it was still emitted. For example:
if the readable was paused at 1000 messages but the last one
received generated 10k lines, but no further messages were
emitted again until the queue was lower than the readable
highWaterMark. A similar behavior still happens with the
new implementation, but the highWaterMark used is fixed: 1024,
and the original stream is resumed again only after the queue
is cleared.
Before making that change, I created a package implementing
the same concept used here to validate it. You can find it
[here](https://github.com/Farenheith/faster-readline-iterator)
if this helps anyhow.
PR-URL: https://github.com/nodejs/node/pull/41276
Reviewed-By: Matteo Collina <matteo.collina@gmail.com>
Reviewed-By: Benjamin Gruenbaum <benjamingr@gmail.com>
Reviewed-By: James M Snell <jasnell@gmail.com>
2022-10-24 09:49:16 -03:00
|
|
|
* }} [options]
|
2021-05-18 03:54:16 +04:30
|
|
|
* @returns {AsyncIterator}
|
|
|
|
*/
|
2022-12-31 16:33:39 +09:00
|
|
|
function on(emitter, event, options = kEmptyObject) {
|
lib: performance improvement on readline async iterator
Using a direct approach to create the readline async iterator
allowed an iteration over 20 to 58% faster.
**BREAKING CHANGE**: With that change, the async iteterator
obtained from the readline interface doesn't have the
property "stream" any longer. This happened because it's no
longer created through a Readable, instead, the async
iterator is created directly from the events of the readline
interface instance, so, if anyone is using that property,
this change will break their code.
Also, the Readable added a backpressure control that is
fairly compensated by the use of FixedQueue + monitoring
its size. This control wasn't really precise with readline
before, though, because it only pauses the reading of the
original stream, but the lines generated from the last
message received from it was still emitted. For example:
if the readable was paused at 1000 messages but the last one
received generated 10k lines, but no further messages were
emitted again until the queue was lower than the readable
highWaterMark. A similar behavior still happens with the
new implementation, but the highWaterMark used is fixed: 1024,
and the original stream is resumed again only after the queue
is cleared.
Before making that change, I created a package implementing
the same concept used here to validate it. You can find it
[here](https://github.com/Farenheith/faster-readline-iterator)
if this helps anyhow.
PR-URL: https://github.com/nodejs/node/pull/41276
Reviewed-By: Matteo Collina <matteo.collina@gmail.com>
Reviewed-By: Benjamin Gruenbaum <benjamingr@gmail.com>
Reviewed-By: James M Snell <jasnell@gmail.com>
2022-10-24 09:49:16 -03:00
|
|
|
// Parameters validation
|
2023-09-29 19:56:20 +09:00
|
|
|
validateObject(options, 'options');
|
lib: performance improvement on readline async iterator
Using a direct approach to create the readline async iterator
allowed an iteration over 20 to 58% faster.
**BREAKING CHANGE**: With that change, the async iteterator
obtained from the readline interface doesn't have the
property "stream" any longer. This happened because it's no
longer created through a Readable, instead, the async
iterator is created directly from the events of the readline
interface instance, so, if anyone is using that property,
this change will break their code.
Also, the Readable added a backpressure control that is
fairly compensated by the use of FixedQueue + monitoring
its size. This control wasn't really precise with readline
before, though, because it only pauses the reading of the
original stream, but the lines generated from the last
message received from it was still emitted. For example:
if the readable was paused at 1000 messages but the last one
received generated 10k lines, but no further messages were
emitted again until the queue was lower than the readable
highWaterMark. A similar behavior still happens with the
new implementation, but the highWaterMark used is fixed: 1024,
and the original stream is resumed again only after the queue
is cleared.
Before making that change, I created a package implementing
the same concept used here to validate it. You can find it
[here](https://github.com/Farenheith/faster-readline-iterator)
if this helps anyhow.
PR-URL: https://github.com/nodejs/node/pull/41276
Reviewed-By: Matteo Collina <matteo.collina@gmail.com>
Reviewed-By: Benjamin Gruenbaum <benjamingr@gmail.com>
Reviewed-By: James M Snell <jasnell@gmail.com>
2022-10-24 09:49:16 -03:00
  const signal = options.signal;
  validateAbortSignal(signal, 'options.signal');
  if (signal?.aborted)
    throw new AbortError(undefined, { cause: signal.reason });
  // Support both highWaterMark and highWatermark for backward compatibility
  const highWatermark = options.highWaterMark ?? options.highWatermark ?? NumberMAX_SAFE_INTEGER;
  validateInteger(highWatermark, 'options.highWaterMark', 1);
  // Support both lowWaterMark and lowWatermark for backward compatibility
  const lowWatermark = options.lowWaterMark ?? options.lowWatermark ?? 1;
  validateInteger(lowWatermark, 'options.lowWaterMark', 1);

  // Preparing controlling queues and variables
  FixedQueue ??= require('internal/fixed_queue');
  const unconsumedEvents = new FixedQueue();
  const unconsumedPromises = new FixedQueue();
  let paused = false;
  let error = null;
  let finished = false;
  let size = 0;

  const iterator = ObjectSetPrototypeOf({
    next() {
      // First, we consume all unread events
      if (size) {
        const value = unconsumedEvents.shift();
        size--;
        if (paused && size < lowWatermark) {
          emitter.resume();
          paused = false;
        }
        return PromiseResolve(createIterResult(value, false));
      }

      // Then we error, if an error happened
      // This happens one time if at all, because after 'error'
      // we stop listening
      if (error) {
        const p = PromiseReject(error);
        // Only the first element errors
        error = null;
        return p;
      }

      // If the iterator is finished, resolve to done
      if (finished) return closeHandler();

      // Wait until an event happens
      return new Promise(function(resolve, reject) {
        unconsumedPromises.push({ resolve, reject });
      });
    },

    return() {
      return closeHandler();
    },

    throw(err) {
      if (!err || !(err instanceof Error)) {
        throw new ERR_INVALID_ARG_TYPE('EventEmitter.AsyncIterator',
                                       'Error', err);
      }
      errorHandler(err);
    },

    [SymbolAsyncIterator]() {
      return this;
    },

    [kWatermarkData]: {
      /**
       * The current queue size
       */
      get size() {
        return size;
      },
      /**
       * The low watermark. The emitter is resumed every time size is lower than it
       */
      get low() {
        return lowWatermark;
      },
      /**
       * The high watermark. The emitter is paused every time size is higher than it
       */
      get high() {
        return highWatermark;
      },
      /**
       * It checks whether the emitter is paused by the watermark controller or not
       */
      get isPaused() {
        return paused;
      },
    },
  }, AsyncIteratorPrototype);

  // Adding event handlers
  const { addEventListener, removeAll } = listenersController();
  kFirstEventParam ??= require('internal/events/symbols').kFirstEventParam;
  addEventListener(emitter, event, options[kFirstEventParam] ? eventHandler : function(...args) {
    return eventHandler(args);
  });
  if (event !== 'error' && typeof emitter.on === 'function') {
    addEventListener(emitter, 'error', errorHandler);
  }
  const closeEvents = options?.close;
  if (closeEvents?.length) {
    for (let i = 0; i < closeEvents.length; i++) {
      addEventListener(emitter, closeEvents[i], closeHandler);
    }
  }

  const abortListenerDisposable = signal ? addAbortListener(signal, abortListener) : null;

  return iterator;

  function abortListener() {
    errorHandler(new AbortError(undefined, { cause: signal?.reason }));
  }

  function eventHandler(value) {
    if (unconsumedPromises.isEmpty()) {
      size++;
      if (!paused && size > highWatermark) {
        paused = true;
        emitter.pause();
      }
      unconsumedEvents.push(value);
    } else unconsumedPromises.shift().resolve(createIterResult(value, false));
  }

  function errorHandler(err) {
    if (unconsumedPromises.isEmpty()) error = err;
    else unconsumedPromises.shift().reject(err);

    closeHandler();
  }

  function closeHandler() {
    abortListenerDisposable?.[SymbolDispose]();
    removeAll();
    finished = true;
    const doneResult = createIterResult(undefined, true);
    while (!unconsumedPromises.isEmpty()) {
      unconsumedPromises.shift().resolve(doneResult);
    }

    return PromiseResolve(doneResult);
  }
}
|
lib: performance improvement on readline async iterator
Using a direct approach to create the readline async iterator
allowed an iteration over 20 to 58% faster.
**BREAKING CHANGE**: With that change, the async iteterator
obtained from the readline interface doesn't have the
property "stream" any longer. This happened because it's no
longer created through a Readable, instead, the async
iterator is created directly from the events of the readline
interface instance, so, if anyone is using that property,
this change will break their code.
Also, the Readable added a backpressure control that is
fairly compensated by the use of FixedQueue + monitoring
its size. This control wasn't really precise with readline
before, though, because it only pauses the reading of the
original stream, but the lines generated from the last
message received from it was still emitted. For example:
if the readable was paused at 1000 messages but the last one
received generated 10k lines, but no further messages were
emitted again until the queue was lower than the readable
highWaterMark. A similar behavior still happens with the
new implementation, but the highWaterMark used is fixed: 1024,
and the original stream is resumed again only after the queue
is cleared.
Before making that change, I created a package implementing
the same concept used here to validate it. You can find it
[here](https://github.com/Farenheith/faster-readline-iterator)
if this helps anyhow.
PR-URL: https://github.com/nodejs/node/pull/41276
Reviewed-By: Matteo Collina <matteo.collina@gmail.com>
Reviewed-By: Benjamin Gruenbaum <benjamingr@gmail.com>
Reviewed-By: James M Snell <jasnell@gmail.com>
2022-10-24 09:49:16 -03:00
|
|
|
|
|
|
|
function listenersController() {
|
|
|
|
const listeners = [];
|
|
|
|
|
|
|
|
return {
|
|
|
|
addEventListener(emitter, event, handler, flags) {
|
|
|
|
eventTargetAgnosticAddListener(emitter, event, handler, flags);
|
|
|
|
ArrayPrototypePush(listeners, [emitter, event, handler, flags]);
|
|
|
|
},
|
|
|
|
removeAll() {
|
|
|
|
while (listeners.length > 0) {
|
|
|
|
ReflectApply(eventTargetAgnosticRemoveListener, undefined, ArrayPrototypePop(listeners));
|
|
|
|
}
|
2023-02-24 09:45:04 +01:00
|
|
|
},
|
lib: performance improvement on readline async iterator
Using a direct approach to create the readline async iterator
allowed an iteration over 20 to 58% faster.
**BREAKING CHANGE**: With that change, the async iteterator
obtained from the readline interface doesn't have the
property "stream" any longer. This happened because it's no
longer created through a Readable, instead, the async
iterator is created directly from the events of the readline
interface instance, so, if anyone is using that property,
this change will break their code.
Also, the Readable added a backpressure control that is
fairly compensated by the use of FixedQueue + monitoring
its size. This control wasn't really precise with readline
before, though, because it only pauses the reading of the
original stream, but the lines generated from the last
message received from it was still emitted. For example:
if the readable was paused at 1000 messages but the last one
received generated 10k lines, but no further messages were
emitted again until the queue was lower than the readable
highWaterMark. A similar behavior still happens with the
new implementation, but the highWaterMark used is fixed: 1024,
and the original stream is resumed again only after the queue
is cleared.
Before making that change, I created a package implementing
the same concept used here to validate it. You can find it
[here](https://github.com/Farenheith/faster-readline-iterator)
if this helps anyhow.
PR-URL: https://github.com/nodejs/node/pull/41276
Reviewed-By: Matteo Collina <matteo.collina@gmail.com>
Reviewed-By: Benjamin Gruenbaum <benjamingr@gmail.com>
Reviewed-By: James M Snell <jasnell@gmail.com>
2022-10-24 09:49:16 -03:00
|
|
|
};
|
|
|
|
}
|