D1385R1 (LEWGI)

Attendees: DH: Davis Herring; LC: Lawrence Crowl; RK: Ronan Keryell; WS: Bill Seymour; OL: Ollie Lo; DS: Dan Sutherland; AT: Alan Talbot; CC: Chris Connelly; BvS: Brian van Straden; JB: Jorg Brown; GL: Graham Lopez; S: Sebastien; AL: Andrew Lumsdaine; VR: Vincent Reverdy; GD: Guy Davidson; DL: Damien L-G; EF: Eric Fiselier; MH: Mark Hoemmen; BS: Bob Steagall; BL: Bryce Lelbach

(13h58) BS (presenting): The scope is perhaps surprisingly narrow. Initial requirements (or goals) for "typical users" and "super users". Typical users want ease of use with mathematical syntax and reasonably high performance. Some have said operator overloading is a "must-have". GD: In the gaming community, they are already using such operators and would ignore a library without them. All the homegrown libraries are practically identical. BL: Is there prior art listed? GD: They're proprietary. BL: Please include whatever prior art you can in the next revision. BS: Super users value performance from customization. BS: Goals are mathematical vocabulary types/operations; a public, intuitive interface; easy LAPACK-like performance; infrastructure for memory storage/layout (might also be extended to quaternions/octonions/tensors); customization facilities; ...at reasonable granularity. BL: Is there any discussion of the goals? MH: We'll have to see what the valid element types are. BS: We have to distinguish mathematical terms of art from overlapping computer science terms, especially matrix, dimension, rank, and vector. DH: Is rank tensor rank or matrix rank? BS: Matrix rank, but we try to avoid it. BS: We distinguish (indexed) arrays and (...) matrices. MH: We use extent in mdspan. BS: "MathObj" means a matrix, row_vector, or column_vector. An engine manages the resources for a MathObj (including execution context); it is selected by template parameter but otherwise private. BS: Row size is the number of rows, bounded by row capacity. Engines may be fixed-size, just fixed-capacity, or neither.
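The engine-parameterized design BS describes might be sketched roughly as follows. The names matrix and fs_matrix_engine come from the discussion, but every definition below is a minimal illustrative assumption, not the paper's actual interface.

```cpp
#include <array>
#include <cassert>
#include <cstddef>

// Sketch: a matrix class whose storage and resource management are
// delegated to an "engine" chosen by template parameter.
template <class T, std::size_t R, std::size_t C>
class fs_matrix_engine {                    // fixed-size, fixed-capacity
  public:
    using element_type = T;
    constexpr T& operator()(std::size_t i, std::size_t j) { return d_[i * C + j]; }
    constexpr T const& operator()(std::size_t i, std::size_t j) const { return d_[i * C + j]; }
    static constexpr std::size_t rows() { return R; }      // size == capacity here
    static constexpr std::size_t columns() { return C; }
  private:
    std::array<T, R * C> d_{};              // engine owns its own elements
};

template <class ET>
class matrix {                              // public interface layer
  public:
    using element_type = typename ET::element_type;
    element_type& operator()(std::size_t i, std::size_t j) { return eng_(i, j); }
    element_type const& operator()(std::size_t i, std::size_t j) const { return eng_(i, j); }
    std::size_t rows() const { return eng_.rows(); }
    std::size_t columns() const { return eng_.columns(); }
  private:
    ET eng_;                                // resources live in the engine
};

// The "mathematical syntax" typical users ask for:
template <class T, std::size_t R, std::size_t K, std::size_t C>
auto operator*(matrix<fs_matrix_engine<T, R, K>> const& a,
               matrix<fs_matrix_engine<T, K, C>> const& b)
{
    matrix<fs_matrix_engine<T, R, C>> r;
    for (std::size_t i = 0; i < R; ++i)
        for (std::size_t j = 0; j < C; ++j)
            for (std::size_t k = 0; k < K; ++k)
                r(i, j) += a(i, k) * b(k, j);
    return r;
}
```

Because the engine is otherwise private, a fixed-capacity or allocator-backed engine could be swapped in without changing this user-facing syntax.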
BS: We want a layered, iterative, and incremental approach, leaving higher-level functionality for further proposals on top of a foundational layer of matrix/vector arithmetic that supports new element types, engines, and arithmetic operations. BS: We do not address tensors because they have additional invariants and different interfaces. GL: Does the same apply to vectors? A matrix with one row or one column is not the same as a vector. What about outer products? BS: The outer product is the matrix outer product. GL: How do I get a vector, not a Kronecker, outer product? BS: Sorry -- we do compute the actual outer product. GL: But the Kronecker product has a different size. A row_vector is not a matrix, then. MH: The differences between mathematical objects are less important than the difference between mathematical objects and computer science objects. GL: But if we're going to be precise about matrices vs. tensors, we should be precise about operations too. BS: It depends on which operators you want. VR: From a differential geometry background, it's important to distinguish matrices and tensors. AL: A vector exists in the abstract without being a row or column vector. BS: I want a scalar product from row_vector*column_vector, not a 1x1 matrix. AL: But should * perform the Hermitian inner product for complex vectors? GD: Thanks for pointing that out. BS: Memory can come from various places and utilize fancy pointers. Ownership and capacity (as distinct from size) are also important. In MKL LAPACK, there are 18 parameters to the SVD function. I can add a row when doing multiple regression. MH: In some applications, you incrementally update the factorization of a matrix. BS: We have the usual C/Fortran element order issue and 0/1-based indexing. AL: There's no debate about 0/1. BL: We can take a poll. BS: Some people want arbitrary bases. RK: In Fortran you can pick any start. MH: Some libraries use negative indices to wrap around.
BS: We have to support heterogeneous operations between elements via "element promotion". The underlying engines might be different, so we do "engine promotion" (to dynamically-resizable if necessary) too. MH: So Nx3 * 3x3 is Nx3 (with only N dynamic). DH: What about engines with different execution contexts? BS: We've focused on the storage issue (adding execution context only Monday evening). BS: We want to customize for performance based on element layout or parallelism/SIMD, so we have "operator traits promotion" as well. BS: We haven't done constexpr yet, but we can do it. BS: That's the introduction. BL: Are there examples to look at before the synopsis? MH: We have test cases. BS: We have trivial examples of usage. BL: Let's look at those next. BS: We have mixed element types and then mixed engine types. BL: What happens with temporaries? Is there any DSEL? BS: The design allows for it. DH: Should we avoid auto for subtleties of memory management? BS: It's allowed to return something convertible to the matrix type. BL: This is the valarray issue; no one uses it due to such inefficiencies. MH: You'd have to make sure to assign, not return by auto. DH: Can operations on rvalue re-use storage? BS: It's not disallowed. MH: The customization and free-function interface could be used to control element types and storage. BL: Eigen avoids temporary copies but also produces an optimizable expression tree. MH: Eigen sometimes created temporaries on purpose. BS: Various traits compute the types and values for an operator. BS: is_complex detects complex. MH: numeric_limits can do that. GD: OK, less wording for us. BS: Only arithmetic or complex types are allowed as elements. BS: matrix_element_promotion selects result types. BL: Why not common_type? MH: That picks complex between it and double. BL: Is this a customization point? So all engines use this trait? BS: Yes. You can use it to multiply a matrix of doubles by a row_vector of bignums.
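MH's objection to common_type can be checked directly: std::common_type picks std::complex&lt;float&gt; for the pair (std::complex&lt;float&gt;, double), because double converts to complex&lt;float&gt; but not the reverse, silently dropping precision; the mathematically expected element type is std::complex&lt;double&gt;. The trait name matrix_element_promotion is from the discussion, but the body below is an illustrative assumption.

```cpp
#include <complex>
#include <type_traits>

// Sketch of a dedicated element-promotion trait that promotes the
// underlying real types of complex operands, unlike std::common_type.
template <class T1, class T2>
struct matrix_element_promotion {
    using type = std::common_type_t<T1, T2>;            // plain arithmetic case
};
template <class T1, class T2>
struct matrix_element_promotion<std::complex<T1>, T2> {
    using type = std::complex<std::common_type_t<T1, T2>>;
};
template <class T1, class T2>
struct matrix_element_promotion<T1, std::complex<T2>> {
    using type = std::complex<std::common_type_t<T1, T2>>;
};
template <class T1, class T2>
struct matrix_element_promotion<std::complex<T1>, std::complex<T2>> {
    using type = std::complex<std::common_type_t<T1, T2>>;
};
template <class T1, class T2>
using matrix_element_promotion_t = typename matrix_element_promotion<T1, T2>::type;

// common_type silently loses precision here:
static_assert(std::is_same_v<std::common_type_t<std::complex<float>, double>,
                             std::complex<float>>);
// the dedicated trait promotes the real type as well:
static_assert(std::is_same_v<matrix_element_promotion_t<std::complex<float>, double>,
                             std::complex<double>>);
```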
BL: common_type should support that. BS: If we don't end up needing the trait, that's fine. BvS: Do we need the operator()(i,j)? Distributed memory can't do that. BS: You can always not instantiate them, or we could SFINAE them away. BL: What's size_tuple? BS: A convenience for things like capacity(). S: Why disallow 0-sized matrices (for forwarding)? BS: I'll have to think about it. fs_matrix_engine is supposed to be simple; maybe another one supports that. BL: Does it hold data? BS: It contains its own data. BL: You can't do alignment or padding. BS: Correct. DH: The mp_bias is undefined behavior. BL: We'll poll 1-based indexing. RK: What about operator[]? BS: Not now; maybe once we can have two arguments. RK: Generic code will already be using it. OL: What about setting capacity? BS: It's fixed size; the capacity is equal to it. GL: Why size_t instead of int? BL: It's precedent and consistency; int is the modern advice, but maybe we can get away with it. GL: So there's no reason particular to this situation. BL: Why are the operator() overloads not noexcept? BS: It's a good question; they have a precondition. GD: Contracts? BL: Not in the standard library yet. MH: The traits and hooks are more important than the details of an engine, right? BL: Yes, but we should look at the details too. DH: Why are things like is_rectangular types instead of constexpr values? BS: Consistency with allocator_traits (like POCMA) and pointer_traits. BL: Can you copy a fixed-size engine to a dynamic engine? BS: Yes, and vice versa if the sizes are right. BL: Then you need constructors. BS: The matrix knows how to copy the engines. BS: dyn_... looks like fs_matrix_engine except for having resize/etc. VR: Why not use extent from mdspan? BS: This is a strawman implementation. We want to find out whether the traits system, etc. makes sense. VR: You'll need a concept of an engine, which will depend on things like an extent.
BL: Are these engines actually part of the proposal or merely examples? BS: The proposal says there should be basic fixed-size and dynamic engines; these are examples of what they might look like. (break at 15h01) (resume at 15h21) GL: Now that I've had time to think about it: at the very top level, how attached are the authors to the idea that there need to be separate row_vector and column_vector types? BS: It's not a matter of personal attachment but of practicality of interface: row_vector*column_vector produces effectively a scalar. If I treated them as matrices, I would get a 1x1 matrix from which one would have to extract the single element. AL: A function called dot could give the scalar and wouldn't be too ugly. GL: Why wouldn't operator* work? BS: If the size were always known statically, you could overload * so as to get a scalar. With runtime sizes, that doesn't work. GL: But vector*vector is a scalar. AL: Unless it's the outer product. GL: But * could mean inner product everywhere; it's the most common. MH: This is the risk of using an operator. GL: I can't read * without Hungarian notation if it might be an outer product. BL: Would you be happier if we had no *? GL: That's one way; if you do that, there's no need for two vector types. Vectors are vectors, and having to distinguish them is useless. BL: I presume row_vector and column_vector have the same interface. GL: But I have to keep up with which is which. AL: But * can't be used for inner product anyway. BL: If I have a function on a vector, is it templated on a concept or does it accept a base class? We have two types that are the same thing just to pick the right overload? BS: Yes. BL: Even with one type with a row/column tag, you have to distinguish. GL: If you can keep track of the types, * isn't ambiguous. OL: Do you have elementwise multiplication? BS: No; we don't define how multiplication should work.
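The named-function alternative AL and GL describe might look like the following sketch, in which a single vector type suffices because dot and outer say explicitly which product is meant. All names below are hypothetical, not the paper's interface.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch: one vector type, with the products spelled as named functions
// so no row/column distinction is needed to get a scalar.
template <class T>
struct math_vector {
    std::vector<T> elems;
};

template <class T>
T dot(math_vector<T> const& a, math_vector<T> const& b) {
    T s{};
    for (std::size_t i = 0; i < a.elems.size(); ++i)
        s += a.elems[i] * b.elems[i];       // scalar result, no 1x1 matrix to unbox
    return s;
}

template <class T>
std::vector<std::vector<T>> outer(math_vector<T> const& a, math_vector<T> const& b) {
    // outer(a, b)[i][j] == a[i] * b[j]; the reader never has to guess
    // which product '*' would have meant
    std::vector<std::vector<T>> r(a.elems.size(), std::vector<T>(b.elems.size()));
    for (std::size_t i = 0; i < a.elems.size(); ++i)
        for (std::size_t j = 0; j < b.elems.size(); ++j)
            r[i][j] = a.elems[i] * b.elems[j];
    return r;
}
```

A conjugating variant (AL's dot_conjugate) for complex elements would sit alongside dot in the same style.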
DH: There is a consistent definition of * here. GL: Outer product means something different for vectors and for 1xn, nx1 matrices. DH: As embedded into the space of matrices, you get the matrix outer product. GL: I disagree. DH: Using static/dynamic extents like mdspan would allow specializing to select a scalar result as appropriate. EF: Does the grabbiness of the operator templates cause problems in a real implementation? BS: We checked with a strawman operation. EF: When you put this into std, they compete with many other operators even for irrelevant operands. GD: We have Kronecker products and Hadamard products; we want * for matrix multiplication. Ideally we would have fancy Unicode operations for the exotic products, or just free functions. GL: This is a weird middle space. You could just say that you have matrices, or 0/1/2-dimensional things, and inner product works on everything, or you can have matrices and vectors, and operator * means something else there. The blending isn't obvious. GD: The specific types offer optimization opportunities and may be convenient for users. GL: So you're advocating for not having any distinction and neither row_vector nor column_vector. GD: No, I'm not. GL: But this way you're getting the ambiguity and the complication. BvS: The vector types could just be aliases. GD: My initial implementation did that, but it caused problems. BL: We could disambiguate with extents. BS: If the extents work, we could go down to two primary types. S: If vector is just an alias, we seem to need tensors for consistency. We can't add them later for ABI reasons. BL: Why wouldn't we just use mdspan for tensors? S: It could be the class that unifies the concepts. AL: Again, just use a function for dot product (and dot_conjugate). Then vector can be vector. EF: I see static_asserts -- should we be using concepts instead? BL: For this proposal's timeframe, yes.
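DH's point about static extents can be sketched as follows: when both extents are compile-time constants, if constexpr can select an unboxed scalar result for a (1 x N) * (N x 1) product and a matrix otherwise. The stand-in mat class is illustrative only; with runtime extents the 1x1 case is only known at run time, which is BS's objection.

```cpp
#include <array>
#include <cassert>
#include <cstddef>

// Stand-in matrix with static extents in the type.
template <class T, std::size_t R, std::size_t C>
struct mat {
    std::array<T, R * C> d{};
    T& operator()(std::size_t i, std::size_t j) { return d[i * C + j]; }
    T const& operator()(std::size_t i, std::size_t j) const { return d[i * C + j]; }
};

template <class T, std::size_t R, std::size_t K, std::size_t C>
auto operator*(mat<T, R, K> const& a, mat<T, K, C> const& b) {
    if constexpr (R == 1 && C == 1) {
        T s{};                              // row * column: a scalar, not mat<T,1,1>
        for (std::size_t k = 0; k < K; ++k)
            s += a(0, k) * b(k, 0);
        return s;
    } else {
        mat<T, R, C> r{};                   // every other shape: a matrix
        for (std::size_t i = 0; i < R; ++i)
            for (std::size_t j = 0; j < C; ++j)
                for (std::size_t k = 0; k < K; ++k)
                    r(i, j) += a(i, k) * b(k, j);
        return r;
    }
}
```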
MH: I argued last night that mdspan is really a low-level multidimensional array, not necessarily a matrix. We use our equivalent as if it were such a tensor in a BLAS-like fashion. There's room above it, even if it's not my priority. S: So you can't use mdspan to unify these concepts. MH: You can use it to unify the storage only. GL: It's not necessarily a typedef unification. MH: mdspan lets you make views of rows/columns/etc. BS: The row_vector/column_vector/matrix classes wrap the engines, with SFINAE for resizing. EF: It looks properly dependent. BL: Let's look at the traits. BS: Next is engine promotion traits per operation; for example, it picks an element type, an allocator for that, and makes a dynamic engine for that. EF: Please give long names of template parameters. I want to see if there might be conflicts specializing on types I don't own. BS: There are higher-level traits later. DH, CC: Template parameter names can reduce visual noise. VR: More generally, I think mdspan is a view on storage. Why can't a matrix combine it with a standard container? MH: mdspan allows owning pointer types. BL: We'll have to poll on that. S: If I had a CPU and a GPU matrix, would they have different engines? BS: Probably. S: Then they couldn't be compatibly promoted. I probably don't want an operation on them. BS: If you want to prevent that operation, you don't create the appropriate traits type. By default it doesn't work. OL: Is "matrix_engine_multiply_promotion" the same as "matrix_multiplication_engine_promotion"? BS: Yes; typo. BS: Each of the four arithmetic operations does engine promotion. The top-level types take as template arguments the engine and an operator traits type. The latter comprises traits for different arithmetic operations to allow customization of each operator's algorithm; they aren't used directly by the class template. DH: Is ",OT" missing from transpose_type? BS: Yes. MH: Couldn't they be different?
GL: They'd have to be different for row_vector/column_vector. BS: We define hermitian_type to be transpose_type for real matrices. DH: Shouldn't you transpose the complex case too? BS: You have to copy it anyway. DH: Then it's a view sometimes and a value sometimes. BS: Jeff Hammond, I think, made the point that the hermitian transpose and regular transpose are both useful for complex matrices. GL, AL: Indeed. BL: We're going to finish presenting and wrap up soon to go to concurrent queues. BS: The next layer of traits are arithmetic traits that perform the operations. BL: Do we have to specialize these? BS: If you want to customize. BL: Eric and I were concerned earlier that having these traits in std invites conflicting customizations. BS: There's a way coming to customize them. BL: For all the traits so far? BS: Yes. EF: Are OP1 and OP2 in matrix_subtraction_traits the operands? BS: Yes. S: I'm a bit concerned about having a set of traits for each operator. Machine learning defines hundreds of operators to specialize for. BS: The rest of the system to be presented might help. BS: matrix_multiplication_traits handles, among other things, producing an unboxed scalar. GL: I think you can do that with a single vector type. BS: Yes, I can get the inner product that way. How do I represent the outer product? GL: With a function. AL: You'll have to do inner with a named function. EF: Is void, as selected here, a generally valid engine type? BS: There is no engine type in this case since we return a scalar. The caller knows not to use that typedef-name; it's there for documentation. VR: I wonder if we are missing a lower-level abstraction than matrices. We have mdspan and containers; we want linear algebra. Do we just need a mathematical_array out of which to build vectors and matrices? AL: Does it matter what's under the top interface layer? VR: Having the mathematical objects in the names is a complication. AL: I think mathematical_array should be a concept.
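The transpose/hermitian distinction discussed above can be illustrated with a small sketch (names hypothetical, not the paper's interface): for real elements the Hermitian transpose coincides with the ordinary transpose, while for complex elements it conjugates as well as transposes, and both operations are useful.

```cpp
#include <cassert>
#include <complex>
#include <cstddef>
#include <vector>

template <class T>
using simple_matrix = std::vector<std::vector<T>>;   // [row][column]; illustrative only

template <class T>
simple_matrix<T> transpose(simple_matrix<T> const& a) {
    simple_matrix<T> r(a[0].size(), std::vector<T>(a.size()));
    for (std::size_t i = 0; i < a.size(); ++i)
        for (std::size_t j = 0; j < a[0].size(); ++j)
            r[j][i] = a[i][j];
    return r;
}

template <class T>
simple_matrix<std::complex<T>> hermitian(simple_matrix<std::complex<T>> const& a) {
    auto r = transpose(a);
    for (auto& row : r)
        for (auto& x : row)
            x = std::conj(x);                        // conjugate as well as transpose
    return r;
}
```

For real element types one would simply define hermitian to forward to transpose, which is the hermitian_type == transpose_type identity BS mentions.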
EF: Have we discussed existing practice like BLAS? GD: There's boost::ublas, but it hasn't been touched since 2009. BS: Finally, we have a default_matrix_operator_traits container (in std or std::la) which is the default operator traits type for matrix. BL: So the lower-level traits are not customization points. BS: This is why we have operator traits promotion. If one set of operator traits evaluates eagerly and the other produces expression templates, the promotion lets the user say how to combine them. EF: We should then avoid suggesting specializing the others. BL: Just prefix them with "default_". BL: We need to wrap up for concurrent queues. BS: We have an example of how all the traits work together to pick a result type for a very heterogeneous multiplication. EF: Is this "template<>" thing an explicit specialization? BS: Yes. BS: You specialize std::is_arithmetic to allow a new element type. DH: Can you specialize is_arithmetic? BL: People do. EF: Maybe you can't specialize it. BS: You can go for is_element instead. BS: There are examples for engine promotion traits interested only in limited types, and for explicit specializations to provide custom operator implementations for certain types. EF: There aren't any user-defined types here, so you can't specialize it. BS: But you can put it in a custom operator traits container. (polls at 16h19) Poll: We want 0-based indexing as opposed to 1-based indexing. (unanimous: 20) EF: Boost has matrix_column and matrix_row. AL: Those are actual pieces of matrices. Poll: We like having separate row_vector and column_vector types in addition to matrix. SF F N A SA (21 present): 3 0 5 4 4 BS: I like the idea of recognizing static row or column extents of 1. Poll: We want explicitly named operations (e.g., dot and outer) in addition to operators. SF F N A SA (21 present): 8 5 2 1 0 JB: I like the existing row_vector/column_vector approach. BL: If we use extents, we get rid of fs_ and dyn_ types. GD: mdspan is non-owning.
BL: But to be consistent, you don't want dynamic extents in a fs_ type. MH: With all static extents, you wouldn't have an allocator. BL: We're just asking the authors to explore it. VR: Can you be fixed-size in one dimension and not in the other? GD: We're proposing such an engine. BS: You could make a fixed_vector-like engine. Poll: Define engine/matrix classes in terms of mdspan + storage and mdspan concepts (e.g., extents), and expose an mdspan-esque interface. This implies that fs_ and dyn_ are combined into one template parametrized on extents (which are either static or dynamic). SF F N A SA (22 present): 6 2 7 0 0 EF: We've been asking for not only implementation experience but usage experience. Ideally, not just tests but something on GitHub to let people try for themselves. BL: We need prior art for gaming and graphics uses. MH: I think it helps to review Eigen and uBLAS and explain differences. EF: It might just be language versions. CC: It's good to know why things are different. (done at 16h33)
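The direction polled above, one template parametrized on mdspan-style extents replacing the fs_/dyn_ pair, might be sketched as follows. Everything below, including the dyn sentinel standing in for dynamic_extent, is an illustrative assumption; a mixed static/dynamic instantiation, as VR asks about, would need a little more machinery.

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <type_traits>
#include <vector>

inline constexpr std::size_t dyn = static_cast<std::size_t>(-1);  // like std::dynamic_extent

template <class T, std::size_t R, std::size_t C>
class basic_matrix {
  public:
    static constexpr bool is_fixed_size = (R != dyn) && (C != dyn);

    // All-static extents: elements stored inline, no allocator involved
    // (MH's point); any dynamic extent: heap storage sized at run time.
    using storage_type =
        std::conditional_t<is_fixed_size,
                           std::array<T, (is_fixed_size ? R * C : 1)>,
                           std::vector<T>>;

    basic_matrix() = default;                      // fixed-size: nothing to allocate
    basic_matrix(std::size_t r, std::size_t c)     // dynamic: sized at run time
        : rows_(r), cols_(c) {
        static_assert(!is_fixed_size, "sizes are fixed by the type");
        d_.resize(r * c);
    }

    std::size_t rows() const { return rows_; }
    std::size_t columns() const { return cols_; }
    T& operator()(std::size_t i, std::size_t j) { return d_[i * cols_ + j]; }
    T const& operator()(std::size_t i, std::size_t j) const { return d_[i * cols_ + j]; }

  private:
    storage_type d_{};
    std::size_t rows_ = R, cols_ = C;              // R and C themselves when static
};
```

In this scheme the fs_/dyn_ split becomes basic_matrix&lt;T, 3, 3&gt; versus basic_matrix&lt;T, dyn, dyn&gt;, with one interface over both.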