Batch operations require all records to be of the same type. #4

@Bananas-Are-Yellow

Description

It looks like inTableAsBatch and inTableAsBatchAsync both take a sequence of Operation<'T>. This means that all the records I want to batch-insert have to be of the same type. This happens to be the case with your example of inserting 200 Game entities, but this is not the case for my situation.

If I understand Azure Table Storage correctly, the requirement for a batch insert is that all entities share the same partition key; however, the other properties present may differ from entity to entity. Is that correct?

In memory, I have a graph structure:

type Child = ...

type Parent = {
    Children: Child []
    ...
}

I can't store this directly in Azure Table Storage, so I have to define entity types:

/// represents a parent
type ParentEntity = {
    /// parent guid
    [<PartitionKey>] PartitionKey: string
    /// "" (empty)
    [<RowKey>] RowKey: string
    ...
}

/// represents a reference to a child
type ParentChildEntity = {
    /// parent guid
    [<PartitionKey>] PartitionKey: string
    /// child number
    [<RowKey>] RowKey: string
    /// child guid (PK of child)
    Child: Guid
}

I want to insert one ParentEntity row and many ParentChildEntity rows. These are logically one insert, since they all belong together, and they all have the same partition key.

How should I do this?
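For context, the only fallback I can see right now is to drop down to the raw Azure Storage SDK for this one operation, since `TableBatchOperation` accepts any mix of `ITableEntity` shapes. This is an untested sketch, not something FSharp.Azure.Storage offers; it assumes the `Microsoft.Azure.Cosmos.Table` package, and the property-copying for `ParentEntity` is elided:

```fsharp
open System
open Microsoft.Azure.Cosmos.Table

// Untested sketch: insert one parent row and many child rows in a single
// entity group transaction, bypassing FSharp.Azure.Storage's Operation<'T>.
let insertGraph (table: CloudTable) (parent: ParentEntity) (children: ParentChildEntity list) =
    let batch = TableBatchOperation()

    // Parent row: same partition key as the children, empty row key.
    let p = DynamicTableEntity(parent.PartitionKey, parent.RowKey)
    // ... copy the remaining ParentEntity properties into p.Properties ...
    batch.Insert p

    for child in children do
        let c = DynamicTableEntity(child.PartitionKey, child.RowKey)
        c.Properties.["Child"] <- EntityProperty.GeneratePropertyForGuid(Nullable child.Child)
        batch.Insert c

    // The service accepts mixed entity shapes here, as long as every row
    // shares one partition key and the batch holds at most 100 operations.
    table.ExecuteBatch batch
```

But this loses the typed record mapping that is the whole point of using the library, which is why I'd prefer a supported way to batch heterogeneous records.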
