
Conversation


Beilinson (Contributor) commented Oct 10, 2025

Description

Convert Matrix4 and Matrix3 to be based on Float64Array. This provides a significant (roughly 2x) speedup of matrix multiplication and other methods, and saves memory. It results in a 4x improvement in Model.pick and propagates throughout other areas.

Matrices are at the heart of 3D graphics, and any performance improvement here radiates throughout the entire codebase.

I approached this primarily to speed up pickModel, where getVertexPosition is limited mainly by the performance of Matrix4.multiplyByPoint.

By making Matrix4 a class extending Float64Array, we get a 2x speedup in that method (as well as various levels of speedups on other methods):

Image

I show the performance improvements of this combined with @javagl 's branch #12658 in #11814.

Importantly, changing the backing data structure to Float64Array produces behavior and results identical to the previous implementation. As per https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number#number_encoding, regular numbers in JS are also 64-bit floats. By using the class ... extends syntax, values are still accessed as matrix[i], rather than forcing something like matrix.buffer[i].
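The shape of the change, in miniature (a sketch only - the real Matrix4 keeps its existing 16-argument constructor and all of its methods):

class Matrix4 extends Float64Array {
  constructor() {
    super(16); // 16 doubles stored contiguously in a single buffer
  }
}

const m = new Matrix4();
m[5] = 1.0;        // element access is still matrix[i] ...
console.log(m[5]); // ... with no matrix.buffer[i] indirection needed
console.log(m instanceof Matrix4, m instanceof Float64Array); // true true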

Testing plan

Test suite + visual comparison between sandcastles.

Author checklist

  • I have submitted a Contributor License Agreement
  • I have added my name to CONTRIBUTORS.md
  • I have updated CHANGES.md with a short summary of my change
  • I have added or updated unit tests to ensure consistent code coverage
  • I have updated the inline documentation, and included code examples where relevant
  • I have performed a self-review of my code

@github-actions

Thank you for the pull request, @Beilinson!

✅ We can confirm we have a CLA on file for you.


Beilinson commented Oct 10, 2025

Copy of conversation from #12968:
@jjspace:

I think the limitation that it's not possible to .freeze() a typed array in JS (another explanation) is enough to give me pause about changing this implementation.

The identity and zero matrices are used frequently across CesiumJS and I'd assume other projects that use CesiumJS. There needs to be a way to guarantee these are always the expected values. Users should not be allowed to do this:

console.log(Cesium.Matrix3.IDENTITY);
Cesium.Matrix3.IDENTITY[2] = 27.0;
console.log(Cesium.Matrix3.IDENTITY);

Currently this code throws an error, but in this branch it works just fine and the identity is now wrong in every place it's used.

It's possible this could become a getter style function that always returns a new matrix but that defeats the purpose of having a shared object and could create more memory issues as it creates new objects every time.

Are there ways to mix frozen, normal arrays and typed arrays? Is there a different way to change internal structure based on expected mutability? Would that just make it more painful and inconsistent to use?

I don't know the answer to these questions but we need to figure it out before pushing forward with this change.

@Beilinson:

There is no issue mixing old object matrices and new array ones, so keeping these as old frozen object-based matrices is a valid solution.

However:

1. Making the frozen matrices not typed-array based will reduce the total performance gains.

2. If anyone changes the frozen matrices, that would instantly break the entire application (so in my opinion it's an unreasonable expectation).

3. Nothing in the current Cesium code prevents someone from doing `Cesium.Matrix3.IDENTITY = new Cesium.Matrix3()`, since the actual `Matrix3`/`Matrix4`/etc. namespace objects aren't frozen themselves.

If after further consideration you believe it's still important, these specific constants can be reverted to frozen object matrices.


javagl commented Oct 11, 2025

The missing freezes and the different equality checks also caught my attention here.

While Object.freeze cannot be applied to a typed array, it might in theory be possible to replace these constants with read-only proxies...

https://sandcastle.cesium.com/index.html#c=ZVJNb9swDP0rhE8eYNhNUuwSN2gxoKd+DNhuVQ+KTCfaZMqQqDTJ0P8+yR/r2go6iNR7j+QDlZHew71kp48PgEdGajzcGit5tbxxTp4E/REEoCx5dkGxdTl9gSEH4EOPKV6n8FVQvIIGKHRwBYQvs3a++JpQgqoKfu61h3h7673eGhTUPV08R/xiuVqPfGuwNHaXp4+R1wZSrC1B6xDPmNvtL1Q8dTKW3EtqDLooNLeHnLN0O+QCemf7Ag7SBPzX/kica4ns1tkzEhy0TPDjSWTjZOk45OBidWk8TsnXYhh7/dbBwJoG/57eeVfMbY1Sk8wAXL8zrI28abbuk1VkwVjaxeHeTGtH05aryw+mtZNpWZHVnk8GN6nyte566xiCM3lZVoxdbySjr7ZB/UYulfeJVFczpW70AXRzJTKFXofumyWWmtCJDFTam/jTBmN+6DOKbFNXEf+OFreo0bR7PKAzcZEiZL/Y3I3JsizrKoafWWyt2Ur3n+Jf

... but it's not unlikely that this comes at a severe performance cost (or other drawbacks that I don't have on the radar - this is just spitballing...)
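For context, here is a minimal sketch of both points - the freeze limitation and the read-only Proxy idea (not what this PR ends up doing, and Proxy traps do add per-access overhead):

const identity = new Float64Array([1, 0, 0, 1]);
// Object.freeze(identity); // throws a TypeError: typed arrays with elements cannot be frozen

const readOnly = new Proxy(identity, {
  set() {
    throw new Error("This matrix is read-only.");
  },
});

console.log(readOnly[0]); // 1 - reads pass through to the typed array
// readOnly[2] = 27.0;    // would throw instead of silently corrupting the constant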

From an "object-oriented design" perspective I don't like that the Matrix4 really is-a Float64Array after this change. Suddenly, Matrix4 has a buffer property, and people will use this (to do all sorts of messy things - and probably even to work around frozen-ness...). But given the limitations of the "language" (JavaScript), I don't know whether this is considered to be important.


Beilinson commented Oct 11, 2025

The missing freezes and the different equality checks also caught my attention here.

While Object.freeze cannot be applied to a typed array, it might in theory be possible to replace these constants with read-only proxies...

I made a freezeMatrix helper which does something similar; I'll push it up after testing, plus an explainer.

... but it's not unlikely that this comes at a severe performance cost (or other drawbacks that I don't have on the radar - this is just spitballing...)

Proxies aren't very efficient, sadly, but I managed to avoid them in my solution!

From an "object-oriented design" perspective I don't like that the Matrix4 really is-a Float64Array after this change.

Right, this is the big thing about this. From a performance perspective, the fact that it really is a Float64Array is probably what allows V8 and other engines to optimize the matrix operations to such a degree.

Suddenly, Matrix4 has a buffer property, and people will use this (to do all sorts of messy things - and probably even to work around frozen-ness...). But given the limitations of the "language" (JavaScript), I don't know whether this is considered to be important.

I was concerned about this quite a lot as well, so I made sure to check the generated TS definitions in Cesium.d.ts. Because of the JSDoc on top of the class definition, the extension is actually hidden away, so in external consuming applications these matrices look exactly the same as they did before (of course, they could inspect them while debugging, but that's not really the issue).

image

* @type {Readonly<Matrix4>}
* @constant
*/
Matrix4.IDENTITY = freezeMatrix(
Beilinson (Contributor Author) commented on these lines:

Currently using the freezeMatrix utility rather than makeFrozenMatrix, but they are fundamentally the same. The actual matrix is just a regular object that behaves exactly like the previous object matrix, which is 100% compatible within the codebase because I didn't change any behavior when using the new Float64Array matrices.

The main difference is future-proofing and avoiding accidents where someone does choose to work with these as real typed arrays (for performance reasons or otherwise). In that case freezeMatrix is the more solid solution: it won't accidentally break just because the IDENTITY and ZERO matrices don't behave like typed arrays, which is the case with makeFrozenMatrix4.

Unlike with a Proxy, there is no misdirection so V8 can still attempt to optimize these, albeit only to the performance level of the old matrices.

* @returns {Readonly<Matrix4>} a frozen matrix
*/
// eslint-disable-next-line no-unused-vars
function makeFrozenMatrix4(
Beilinson (Contributor Author) commented on these lines:

This is basically just creating the old matrix type and manually binding the few added functions.
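Roughly, the idea could look like the sketch below (hypothetical, not the helper committed in this PR; the method list is illustrative, and those Matrix4.prototype methods only index this[0..15], so they work on a plain object):

function makeFrozenMatrix4Sketch(matrix) {
  // Rebuild the pre-Float64Array shape: a plain object with indexed properties...
  const frozen = {};
  for (let i = 0; i < 16; i++) {
    frozen[i] = matrix[i];
  }
  frozen.length = 16;
  // ...plus the handful of prototype methods callers expect.
  for (const name of ["clone", "equals", "equalsEpsilon", "toString"]) {
    frozen[name] = Matrix4.prototype[name];
  }
  return Object.freeze(frozen);
}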

Beilinson (Contributor Author) commented:

@javagl @jjspace I added two possibilities for keeping the IDENTITY and ZERO matrices frozen. I am personally leaning towards the freezeMatrix version with full compatibility - what do you think?


Beilinson commented Oct 11, 2025

the different equality checks also caught my attention here.

Jasmine apparently uses _.isEqual instead of a regular === comparison, so in Jasmine -0 does not equal 0.

In consuming code and throughout the codebase, there is no way to compare matrices other than Matrix4.equals (or the equivalent for the other matrix types), so this has no effect on any other part of the codebase or on consuming applications.


javagl commented Oct 11, 2025

Just as a short "ack": I quickly looked over the freezing approaches. The makeFrozenMatrix one is more accessible and easier to understand, but... freezeMatrix raises the bar pretty high in that comparison 😁 Which means: I'm not really a JavaScript expert, and have no clue what is happening there... Others will have to chime in here and say which one is "better".

About the equality: I'll (also) have to dive a bit deeper into that. The shallow way of phrasing my question could be: Was it necessary to change the equality check? If so, very roughly speaking: Whatever was done there (deep in Jasmine) could be done in normal client code, meaning that there is some ~"breaking change" hidden here. This would likely not be a dealbreaker, because it apparently would be an obscure corner case - but I'm usually curious about these....


Beilinson commented Oct 11, 2025

Found the relevant code in Jasmine; this seems pretty obfuscated to me:

https://github.com/jasmine/jasmine/blob/bad9c63bf72fd32d6fb816185276036dc8aed6bd/src/core/matchers/matchersUtil.js#L100-L102

// Identical objects are equal. `0 === -0`, but they aren't identical.
// See the [Harmony `egal` proposal](http://wiki.ecmascript.org/doku.php?id=harmony:egal).
if (a === b) { return a !== 0 || 1 / a == 1 / b; }

Basically, they explicitly test this case because they want to compare by value exactly, not with JS's regular equality comparison. The check works because 1/0 === Infinity while 1/-0 === -Infinity, which is the only observable difference between 0 and -0. I'm pretty sure this isn't a relevant concern for matrix operations; Infinity/-Infinity isn't a value that should appear in matrix operations, right 🤔?
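In isolation, the distinction Jasmine is drawing (the same SameValue semantics that Object.is uses):

console.log(0 === -0);         // true  - strict equality cannot tell them apart
console.log(Object.is(0, -0)); // false - SameValue can
console.log(1 / 0, 1 / -0);    // Infinity -Infinity - the trick in the snippet above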

If this is a noteworthy issue I guess I could add a note to the changelog? I'm really not sure that it is though.

I found some Stack Overflow answers saying that Jasmine's equality comparison was taken from Underscore, which is probably pretty popular. Whether anyone is using Underscore to compare matrices in consuming code is anyone's guess.

Beilinson (Contributor Author) commented:

Just as a short "ack": I quickly looked over the freezing approaches. The makeFrozenMatrix one is more accessible and easier to understand, but... freezeMatrix raises the bar pretty high in that comparison 😁 Which means: I'm not really a JavaScript expert, and have no clue what is happening there... Others will have to chime in here and say which one is "better".

Yeah, that's basically what I felt as well, and I wrote it 😂

It could probably do with some more small, well-named functions, but there's a limit to how simple I can make brute-force copying an object's entire prototype chain onto a new object while preventing access to specific dangerous aspects 😶‍🌫️


javagl commented Oct 12, 2025

Ahh, yeah... the equality. There's always a chance to bring up https://eqeq.js.org/

One reason for why I'm asking is that some Jasmine matchers are re-wired in addDefaultMatchers.js. For completeness, let me quote the full documentation of this file here:


I knew that there was some trickery going on with if (typeof a.equalsEpsilon === "function") for the epsilon case, and thought that something similar might exist for the Matrix/Cartesian/... classes that offer some equals function, but that doesn't seem to be the case. It does appear to change toEqual in some way, but I'm not sure how far it deviates from the built-in Jasmine version.

EDIT: This overridden toEqual does do some checks with isTypedArray. Given how this is implemented, I think it does not catch the case of the Matrix4 here (because it checks for the exact type, and not via instanceof), but I'll have to look more closely at all this...


About the extends: Indeed, the @implements {ArrayLike<number>} seems to "override" what could otherwise be extracted from the extends statement. The point is: if that extends caused Float64Array to appear in the (TypeScript) type, then it would probably mean that something like
const m : Float64Array = Matrix4.IDENTITY; /* The frozen one!*/
would fail, but that doesn't seem to be the case.

(I could try to explain my lack of knowledge here - either by saying that I'm a Java guy (where all this is soooo much cleaner and clearer), or by saying that nobody really knows what the mix of JavaScript/JSDoc/TSDoc and the associated tooling are actually doing. But eventually, the truth is what is written into the d.ts file, and that seems to be unchanged.)

Beilinson (Contributor Author) commented:

const m : Float64Array = Matrix4.IDENTITY; //typeerror (Frozen matrix is still typed as a `Matrix4`)

Both the above and the below should act the same,

const m : Float64Array = new Matrix4(); // same typeerror

since we also explicitly define the type of the identity and zero matrices as Matrix4 (or the equivalent for the other matrix types).

A user would have to explicitly cast using new Matrix4() as any as Float64Array; the plain const m: Float64Array = ... wouldn't work, because the Float64Array type does not appear in the generated types, as you said.


javagl commented Oct 12, 2025

I'll have to leave some of the considerations here to others (or spend more time with dedicated tests and reading).

But a small correction to what I said earlier: The re-defined toEqual matcher eventually calls a function

function isTypedArray(o) {
  return FeatureDetection.typedArrayTypes.some(function (type) {
    return o instanceof type;
  });
}

And (if I'm not completely wrong here) this check will return true after the ...extends Float64Array, but did not return true before this change. (Not sure how important that is - just a pointer for now.)
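A quick way to see the point, assuming the extends-based Matrix4 from this branch:

const m = new Matrix4();                // the Float64Array-backed matrix
console.log(m instanceof Float64Array); // true - so isTypedArray(m) is now true
console.log(ArrayBuffer.isView(m));     // also true
// The previous object-based Matrix4 was neither, so the overridden toEqual
// matcher now takes the typed-array path for matrices where it didn't before.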


jjhembd commented Oct 13, 2025

The performance improvement is impressive. It makes sense: with TypedArrays, the runtime can predict ahead of time exactly how much memory a given matrix will need.

I have 3 questions:

  • Have we looked at what other JS matrix libraries do? For example, glMatrix already uses TypedArrays. The discussion there about further modernization could give some hint at possible pitfalls. It was mentioned that TypedArray takes slightly longer to create. But for CesiumJS, this shouldn't be an issue since we are using pre-created scratch variables almost everywhere.
  • Why only Matrix3 and Matrix4? I would think Matrix4.multiplyByPoint could be even faster if Cartesian3 was a TypedArray.
  • For picking in particular: Are the matrix multiplications on the CPU just emulating what happened on the GPU? If so, can we get more speedup by using Float32Array to mimic the GPU precision?

Beilinson (Contributor Author) commented:

Hey @jjhembd

Have we looked at what other JS matrix libraries do? For example, toji/gl-matrix#453. The discussion there about further modernization could give some hint at possible pitfalls. It was mentioned that TypedArray takes slightly longer to create. But for CesiumJS, this shouldn't be an issue since we are using pre-created scratch variables almost everywhere.

I hadn't heard of that library, but it sounds very relevant. From benchmarking I also saw that a TypedArray takes slightly longer to create, but at the end of the day it's a tradeoff between creation cost and the performance of the operations on these matrices, so unless more matrices are being created than operations are being performed on them, this is still a net performance gain.

Why only Matrix3 and Matrix4? I would think Matrix4.multiplyByPoint could be even faster if Cartesian3 was a TypedArray.

Matrix2 should probably get this treatment as well. Regarding the Cartesian classes:

My main goal in this PR was to completely avoid breaking changes. The only way I thought of to do that with the Cartesians would be to define all the existing x/y/z properties as getters/setters over the underlying buffer, but that would negate the benefit entirely: instead of accessing the element directly on the object, we would have to go through the overhead of getters/setters (in my experiments this slowed performance down quite a lot).

The other option was to completely break the Cartesian classes and require [0/1/2] indexing to access the elements (as with the Matrix classes), but that's not really an option because all external code relying on these classes would break (and the amount of internal code to update is immense).

If you have any ideas for how to do this without breaking changes while still improving performance, that would be great!

For picking in particular: Are the matrix multiplications on the CPU just emulating what happened on the GPU? If so, can we get more speedup by using Float32Array to mimic the GPU precision?

I also had this question, and found some very weird results.

  1. Matrix operations were either equivalent or slightly slower with Float32 than with Float64. My assumption is that since the operations are performed by V8, the values are treated as regular numbers. The JS spec really only acknowledges Float64 (or, in some cases, pure integer values), so all these Float32 numbers probably have to be converted to and from Float64 for each operation.
image
  2. I did see that creating a Float32Array of size 16 was actually about 4x faster than creating the same Float64Array.
  3. Despite that, it seems that Cesium really needs the Float64 accuracy. Even though most of the tests passed, when I tried running the sandcastle it would crash instantly when using a Float32Array (a rough illustration of the precision problem is below).
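To illustrate the precision problem with my own numbers (not from the PR): ECEF coordinates are on the order of the Earth radius, ~6.4e6 m, where adjacent Float32 values are about 0.5 m apart.

const x = 6378137.123;                // a plausible ECEF coordinate in meters
const asFloat32 = Math.fround(x);     // round-trip through Float32 precision
console.log(asFloat32);               // 6378137 - the sub-meter detail is gone
console.log(Math.abs(x - asFloat32)); // ~0.123 m of error from a single round-trip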


Beilinson commented Oct 13, 2025

I see that the toji/gl-matrix#453 issue also mentions the idea of getter/setters for their Vec3 class:

export class Cartesian3 extends Float64Array {
  get x() { return this[0]; }
  set x(value) { this[0] = value; }

  get y() { return this[1]; }
  set y(value) { this[1] = value; }

  get z() { return this[2]; }
  set z(value) { this[2] = value; }
}

These are the results of testing multiplyByPoint with regular object Cartesians vs. Float64-based Cartesians like the above:
image
If I convert the multiplyByPoint code to access the cartesian by index rather than by x/y/z, these are the new results:

image

Surprisingly, it's a net loss regardless of how we look at it.


javagl commented Oct 13, 2025

It may be a bit of a tangent (at first glance), but I see some connection to #12780 here. Imagine Cartesian3 was an interface with different implementations. You could create a new Float64Array(3 * 100) that serves as the backing array for 100 Cartesian3 objects, and perform certain (bulk) operations directly on this array.
(Something like this could be possible for Matrix4 right now already, if there was a constructor that received an ArrayBuffer - but there isn't a clear path for how that could be exploited...)
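Spitballing what that could look like with plain typed-array views (hypothetical - nothing in this PR provides such a constructor):

const backing = new Float64Array(3 * 100);                       // storage for 100 points
const point0 = new Float64Array(backing.buffer, 0, 3);           // view over elements 0..2
const point42 = new Float64Array(backing.buffer, 42 * 3 * 8, 3); // view over elements 126..128

point42[0] = 1.0;          // writes through to `backing`
console.log(backing[126]); // 1
// A bulk transform could then loop over `backing` directly, without ever
// touching the individual per-point views.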


mzschwartz5 commented Oct 21, 2025

@javagl @Beilinson

I read about half of this thread, so disregard me if I'm not fully informed, but this comment from @javagl caught my attention:

From an "object-oriented design" perspective I don't like that the Matrix4 really is-a Float64Array after this change. Suddenly, Matrix4 has a buffer property, and people will use this (to do all sorts of messy things - and probably even to work around frozen-ness...). But given the limitations of the "language" (JavaScript), I don't know whether this is considered to be important.

So instead of a Matrix4 that is-a Float64Array, why don't we make a Matrix4 that has-a Float64Array? That member can be private, with no accessible getter, so that no one changes it. You don't really even need to worry about freezing it in that case. Of course, JavaScript won't throw an error for trying to access such a private member, but that's the language we're working with. It still stands that when you see a private member, the contract you're supposed to abide by is that you do not access it.


javagl commented Oct 21, 2025

why don't we make a Matrix4 that has-a Float64Array?

@Beilinson may chime in with additional details, but I think that some of the performance benefits can be attributed to the fact that one can still use the matrix[index] syntax, and this directly (and inherently) works because it is a Float64Array. There is also no way to give matrix[index] = 42; the desired semantics of matrix.thatInternalArray[index] = 42;...
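A tiny illustration of that last point (sketch; HasAMatrix4 is a made-up name):

class HasAMatrix4 {
  constructor() {
    this._values = new Float64Array(16); // private-by-convention backing store
  }
}

const m = new HasAMatrix4();
m[0] = 42;                 // just creates an own property named "0" on m
console.log(m._values[0]); // 0 - the backing array never sees the write
// Only a Proxy (with its own per-access overhead) could route m[0] to m._values[0].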


mzschwartz5 commented Oct 21, 2025

I think you'd have to do some testing to back that claim. I'm not so sure that's where the performance benefits are coming from - I think the main benefit is that a Float64Array is backed by a contiguous block of memory, so access is faster and more cache-friendly.

Could be wrong though, that's just my hunch! As it stands, the Matrix4 class has named getters; if the performance benefit is not from the direct bracket access, then we can keep the class as-is and also save some trouble in renaming matrix usage across the CesiumJS codebase.


Beilinson commented Oct 21, 2025

@javagl is exactly right; the goal of the PR was to avoid any breaking changes. My original intent was to hold the Float64Array internally, use it directly in all the code, and only expose getters/setters for backwards compatibility with external, non-Cesium code.

My main issues with this were:

  1. It ended up being too large a refactor for me to take on
  2. I felt that using a completely private and internal member of a class directly in other classes is pretty bad OOP :(

I then tried the refactor using only getters/setters over the internal array, but the performance of that was mixed - for the most part it was significantly slower than the older code. I'm not a V8 wizard, but based on my reading of https://v8.dev/blog/fast-properties and some other related blog posts (https://v8.dev/blog/dataview discusses a bit of the internal TypedArray machinery), these are more or less the V8 behaviors when accessing a matrix with each of the different structures:

  1. Old (the current version prior to this PR): The matrix has a hidden class that uses elements to store the actual values. Most likely these are actually stored in an internal array, just not a typed array, so access to an element is more or less equivalent to access into an array. However, because it's not a typed array, these elements may be of any type. In practice V8 is pretty conservative and never assumes more about your data than it is first given, and since matrices always contain numbers this is probably treated as a number array. Even so, these numbers are not stored contiguously. V8 has two kinds of numbers (there's a bit about this in https://v8.dev/blog/mutable-heap-number): SMIs, stored directly on the property/element, and heap numbers, stored on the heap (hopefully contiguously). Because matrices are initialized with default values of 0.0, these are always (at least in V8) SMIs, so every function using matrices must account for two kinds of number memory access and three kinds of math operations: between SMIs, between heap numbers, and between both.
  2. Matrix with an internal array plus getters/setters over it: Because the elements can no longer be analyzed as plain indices into a single array (each getter could technically access a different array), every get/set in the compiled code has to follow the pointer to the backing array and then index into it. This is additional overhead that may undercut the benefit of the contiguous memory.
  3. Matrix that extends a Float64Array: Here we essentially give the compiler a further assurance - the values are contiguous, of a fixed length, and of a known element type. All code that uses and accesses matrices can be optimized because the compiler can ascertain (as long as the function is monomorphic, which it should always be here) that it is working with one or two contiguous memory buffers of Float64 values. I'm guessing the compiled code itself may also be a bit faster because all values are definitively treated as Float64 read from an underlying buffer, without the SMI complexity described in 1.

It's extremely important for me to mention that a lot of what I wrote above are assumptions from reading those blogs, so of course seeing some real numbers would be nice:

Performance Results

I ran 4 versions of 1000 randomized matrices through https://jsbenchmark.com, benchmarking multiplyByPoint against 1000 randomized Cartesians.

image
  1. Object matrix (same matrix code as in main currently)
  2. Array matrix (similar to this PR, but with class Matrix4 extends Array instead)
  3. Float64Matrix (same as this PR)
  4. Getter matrix (something that looks like get 0() { return this.float64Array[0]; } for every element; all four shapes are sketched below)
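For reference, the four shapes look roughly like this (sketches trimmed to 4 elements; the real classes carry 16):

class ObjectMatrix {                       // 1. plain indexed properties, like main today
  constructor() { this[0] = 0; this[1] = 0; this[2] = 0; this[3] = 0; }
}

class ArrayMatrix extends Array {          // 2. regular Array subclass
  constructor() { super(4); this.fill(0); }
}

class Float64Matrix extends Float64Array { // 3. this PR
  constructor() { super(4); }
}

class GetterMatrix {                       // 4. getters/setters over an internal buffer
  constructor() { this._values = new Float64Array(4); }
  get 0() { return this._values[0]; }
  set 0(value) { this._values[0] = value; }
  // ...and likewise for the remaining indices
}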

Beilinson (Contributor Author) commented:

I think you'd have to do some testing to back that claim. I'm not so sure that's where the performance benefits are coming from - I think the main benefit is that a Float64Array is backed by a contiguous block of memory, so access is faster and more cache-friendly.

Could be wrong though, that's just my hunch! As it stands, the Matrix4 class has named getters; if the performance benefit is not from the direct bracket access, then we can keep the class as-is and also save some trouble in renaming matrix usage across the CesiumJS codebase.

Not sure exactly what you meant by this: there are no named getters except for length, and this PR doesn't require any renaming since the change is completely hidden by the power of ES6 classes!

By the way, if the fact that the new syntax is explicitly a class extending Float64Array is an important concern, we could do regular ES5-style prototype-chain manipulation to get the same extension behavior and performance benefits, but in my opinion that is much less healthy for the codebase long term (see #8359).
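For completeness, that alternative would look roughly like the sketch below (subclassing a typed array still needs Reflect.construct, so it isn't strictly ES5 - it just swaps class syntax for manual prototype wiring; Matrix4Proto is a made-up name):

function Matrix4Proto() {
  // Allocate a real Float64Array whose prototype is Matrix4Proto.prototype -
  // the same thing `class ... extends Float64Array` arranges under the hood.
  return Reflect.construct(Float64Array, [16], Matrix4Proto);
}

Matrix4Proto.prototype = Object.create(Float64Array.prototype, {
  constructor: { value: Matrix4Proto, writable: true, configurable: true },
});
Object.setPrototypeOf(Matrix4Proto, Float64Array); // inherit the statics too

const m = new Matrix4Proto();
console.log(m instanceof Matrix4Proto, m instanceof Float64Array); // true true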

Beilinson closed this Oct 21, 2025
Beilinson reopened this Oct 21, 2025
Beilinson (Contributor Author) commented:

In the previous comment I showed the performance on Matrix4.multiplyByPoint. In fact, Matrix4.multiply shows an even better performance improvement:

image

This is because, as I explained in the comment above, the actual updates of the memory values as well as the math operations can be significantly faster on a Float64Array than even on a regular Array, and getters/setters aren't even comparable in performance to the existing object matrices.


mzschwartz5 commented Oct 21, 2025

Going to try to clarify my above comment.

I'm talking about functions like Matrix4.getElementIndex. In my proposed version of this PR, the public interface for Matrix4 stays the same, but instead of being ArrayLike it holds an internal Float64Array. Users and internal Cesium code continue to access matrices via these methods, so the refactor is limited to the Matrix4 module only.

Based on what you've said, it sounds like you were originally planning to do something like this by making Matrix4 extend Float64Array but, as @javagl said, this exposes the internal buffer object unintentionally (and undesirably). You also said this would be a big refactor to allow Cesium-internal code to access the Float64Array directly (via bracket notation). I'm suggesting ditching the extends approach (inheritance) and using a composition approach.

You also said you fear indirect access via methods like getElementIndex degrades the performance benefits of the Float64Array approach, but I don't know if that's true. I think the main benefits come from the memory contiguity of the Float64Array, not the "direct" access via brackets. My apologies if I'm misunderstanding your concerns, though.
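A minimal sketch of the composition variant being described (hypothetical - getElement/setElement are illustrative names; getElementIndex mirrors the existing static on Matrix4):

class ComposedMatrix4 {
  constructor() {
    this._values = new Float64Array(16); // private-by-convention backing store
  }
  static getElementIndex(column, row) {
    return column * 4 + row;             // column-major, as Matrix4 stores its elements
  }
  getElement(column, row) {
    return this._values[ComposedMatrix4.getElementIndex(column, row)];
  }
  setElement(column, row, value) {
    this._values[ComposedMatrix4.getElementIndex(column, row)] = value;
  }
}

Callers would go through these methods instead of bracket indexing, which is exactly the extra indirection whose cost is being debated above.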


Beilinson commented Oct 21, 2025

Thanks for clarifying, @mzschwartz5. To summarize the concerns from the rest of the team as I understand them, and my stance on them:

  1. Having Matrix4 extend Float64Array is my preferred approach here. The fact that an internal buffer is exposed (though not at the TypeScript level) is equivalent to storing an internal array as an untyped private member: either way the user could do matrix.buffer[0] or matrix.internalArray[0], either way TypeScript would scream at them :), and the documentation would not expose these internal details.
  2. From what I have read of the V8 blogs, the "direct" access most likely is a big part of the performance improvement; introducing getters/setters/getElementIndex adds a level of indirection that V8 struggles to optimize for. The performance benchmarks above reflect this.
  3. The big refactor I said I feared was refactoring all of Cesium to do matrix.internalArray[0], which ended up making this PR extremely long and repetitive (and I just had endless errors, because sadly far more files than just the MatrixN modules touch the matrix internals). It also had zero improvement over just extending directly (see point 1 for my reasoning).
  4. There is a discussion about the ability to freeze the IDENTITY and ZERO matrices. I have two possible implementations here that solve the issue of freezing the new extended matrices.
  5. There is also a discussion regarding the equality comparison in Jasmine specifically; I'm not sure what the current stance on this is, or whether there is actually an issue here.

I apologize if there are any other concerns/pointers I missed!


javagl commented Oct 22, 2025

Based on what you've said, it sounds like you were originally planning to do something like this by making Matrix4 extend Float64Array but, as @javagl said, this exposes this internal buffer object unintentionally (and undesirably).

The response to my concern (above) sounded convincing to me. I may have overestimated the implications - my Java background made me think ~"Oh dear, people will do const m : Float64Array = new Matrix4();" and then (worse) m.buffer..... But the typing looks clean (so people don't see that it is a Float64Array), and beyond that, in JavaScript, people can mess with all._sorts._of["things"] if they want to. The main point is: I now think it's not as concerning as I initially thought. (I'd still understand hesitation and scrutiny, though - we should be absolutely sure, given the importance of the MatrixN classes.)
