🧪 Temporary Table in Entity Framework Extensions
Control how staging tables are created and used
Temporary tables act as a staging area to speed up bulk operations in EF Core before the final bulk insert, update, delete, or merge.
With Entity Framework Extensions, you have full control over how these tables are created, named, optimized, and even persisted.
Use the options below to decide whether you want faster inserts, stable naming, or raw speed with table locks.
📌 Overview of Options
Here's a quick guide to every available option in Entity Framework Extensions when using the temporary table strategy with your EF Core DbContext. Click to jump directly to details:
🏷️ Naming & Schema
- TemporaryTableName – set a custom name for the temporary table
- TemporaryTableSchemaName – choose the schema for the temporary table
- TemporaryTableUseSameName – reuse the same table name instead of generating one
🛠️ Creation & Lifecycle
- TemporaryTableCreate – control whether the temporary table should be explicitly created
- TemporaryTableCreateAndDrop – explicitly create the temporary table and automatically drop it once the operation completes
- TemporaryTablePersist – keep the temporary table alive after the operation (for debugging/inspection)
- TemporaryTableIsMemory – create the temporary table as a memory-optimized table (if supported)
- UsePermanentTable – use a real permanent table instead of a #temporary table
⚡ Performance & Batching
- TemporaryTableBatchByTable – dynamically set the batch size by dividing the total number of rows by the specified number of batches
- TemporaryTableInsertBatchSize – define the number of rows per batch when inserting into the temporary table
- TemporaryTableMinRecord – set the minimum row count before switching to a temporary table strategy
- DisableTemporaryTableClusteredIndex – prevent creation of a clustered index on the temporary table (can improve performance in some scenarios)
🔒 Locking & Concurrency
- TemporaryTableUseTableLock – lock the temporary table during inserts (default = true for optimal performance)
⚡ Decision Table
Goal | Best Options | Notes |
---|---|---|
Debug or inspect staged data | TemporaryTablePersist, UsePermanentTable | Tables stay visible; temp persists only while connection is open; permanent persists until dropped manually |
Stable table naming | TemporaryTableName, TemporaryTableUseSameName, TemporaryTableSchemaName | Useful for debugging/profiling; ⚠️ risky with concurrent jobs (name conflicts) |
Faster inserts | TemporaryTableInsertBatchSize, DisableTemporaryTableClusteredIndex, TemporaryTableIsMemory | Larger batch size = faster; disabling index speeds inserts but slows merge |
Avoid staging for small sets | TemporaryTableMinRecord (default = 20) | Inline table used for <20 rows; ⚠️ keep low due to SQL parameter limits |
Control table creation | TemporaryTableCreate, TemporaryTableCreateAndDrop | Needed when providing your own TemporaryTableName; CreateAndDrop is safest |
Improve throughput on large jobs | TemporaryTableUseTableLock (default = true) | Default lock speeds inserts; can cause blocking with shared/global tables |
Split giant batches | TemporaryTableBatchByTable | Divides work into multiple staging tables for very large datasets |
🏷️ TemporaryTableName
Gets or sets the exact name to use for the temporary (or permanent) staging table.
If you provide a custom table name, you must also enable the option TemporaryTableCreate = true if you want the library to create the table.
If you want the library to create and drop the table automatically afterward, you can simply use TemporaryTableCreateAndDrop = true instead.
The name you choose determines the type of table that will be used:
- "MyTemporaryTableName" → uses a permanent table
- "#MyTemporaryTableName" → uses a local temporary table
- "##MyTemporaryTableName" → uses a global temporary table
```csharp
// @nuget: Z.EntityFramework.Extensions.EFCore
using Z.EntityFramework.Extensions;

context.BulkMerge(list, options =>
{
    options.TemporaryTableName = "#MyTemporaryTableName";
    options.TemporaryTableCreateAndDrop = true;
});
```
💡 Why it can be useful
Using your own temporary table name can be very handy in different scenarios, such as:
- Re-using the same table across multiple operations instead of creating and dropping it every time.
- Improving SQL query plan caching since the optimizer can recognize the same table name and reuse execution plans.
- Debugging and testing by making the temporary table visible with a predictable name.
- Coordinating across processes (for example, with global temp tables) when different sessions need to share data.
🏷️ TemporaryTableSchemaName
Gets or sets the schema name to use for the staging table.
⚠️ When using a temporary table (local # or global ##), the schema name is always ignored due to SQL Server limitations. This option should only be used when working with permanent tables.
👉 If you set a schema name while using a temporary table, it will simply be ignored without causing an error.
```csharp
// @nuget: Z.EntityFramework.Extensions.EFCore
using Z.EntityFramework.Extensions;

context.BulkMerge(list, options =>
{
    options.TemporaryTableSchemaName = "ZZZ";
    options.TemporaryTableName = "MyTemporaryTableName"; // use a permanent table
    options.TemporaryTableCreateAndDrop = true;
});
```
💡 Why it can be useful
- Organize staging tables under a dedicated schema (e.g., "Staging" or "Temp") instead of cluttering the dbo schema (see the sketch after this list).
- Apply different permissions or security rules depending on the schema.
- Keep staging tables isolated from production objects, making maintenance and cleanup easier.
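If you go the dedicated-schema route, a minimal sketch might look like the following; the Staging schema and Customer_Staging table names are illustrative, and the schema is created up front with a raw SQL command because the library is not documented to create schemas for you.

```csharp
// Minimal sketch: "Staging" and "Customer_Staging" are illustrative names.
// @nuget: Z.EntityFramework.Extensions.EFCore
using Microsoft.EntityFrameworkCore; // for context.Database.ExecuteSqlRaw
using Z.EntityFramework.Extensions;

// Ensure the dedicated schema exists before staging into it.
context.Database.ExecuteSqlRaw(
    "IF SCHEMA_ID(N'Staging') IS NULL EXEC(N'CREATE SCHEMA Staging');");

context.BulkMerge(list, options =>
{
    options.TemporaryTableSchemaName = "Staging";
    options.TemporaryTableName = "Customer_Staging"; // no '#' prefix => permanent table, so the schema applies
    options.TemporaryTableCreateAndDrop = true;      // create the staging table, then drop it afterward
});
```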
🏷️ TemporaryTableUseSameName
Reuses the same temporary table name instead of generating a unique one.
When enabled, the library automatically builds the temporary table name by using the prefix #ZZZ_ followed by your destination table name.
For example:
- Destination table: Customer → Temporary table: #ZZZ_Customer
```csharp
// @nuget: Z.EntityFramework.Extensions.EFCore
using Z.EntityFramework.Extensions;

context.BulkMerge(list, options =>
{
    options.TemporaryTableUseSameName = true;
});
```
💡 Why it can be useful
- Consistent table naming – Makes debugging easier since you always know what temporary table to look for.
- Query plan reuse – SQL Server can cache and reuse execution plans more efficiently when the table name stays the same.
- Better troubleshooting – Helpful if you need to capture or inspect intermediate data during bulk operations.
🏷️ TemporaryTableCreate
Forces the library to explicitly CREATE the temporary table when a custom TemporaryTableName is set.
This is required because the library cannot know whether you plan to reuse an existing table or want it to create a new one.
When the TemporaryTableName is automatically generated by the library, the table is already created and dropped for you – so you don't need to set this option.
```csharp
// @nuget: Z.EntityFramework.Extensions.EFCore
using Z.EntityFramework.Extensions;

context.BulkMerge(list, options =>
{
    options.TemporaryTableName = "#MyTemporaryTableName";
    options.TemporaryTableCreate = true;
    options.TemporaryTablePersist = true;
});
```
💡 Why it can be useful
- Control table creation – Useful if you want to ensure the staging table always exists before running a bulk operation.
- Prevent hidden issues – Avoids unexpected errors when the library assumes a table already exists but it doesn't.
- Support reusable workflows – Handy in scenarios where the same temporary table is dropped and recreated multiple times during a process.
- Testing and debugging – Lets you explicitly manage the lifecycle of the staging table, making it easier to test different cases.
🏷️ TemporaryTableCreateAndDrop
Forces the library to explicitly CREATE the temporary table and then automatically DROP it after the operation is complete.
This option is a shortcut that combines TemporaryTableCreate = true with automatic cleanup, so you don't need to manage dropping the table yourself.
When the TemporaryTableName is automatically generated by the library, the table is already created and dropped for you – so you don't need to set this option.
```csharp
// @nuget: Z.EntityFramework.Extensions.EFCore
using Z.EntityFramework.Extensions;

context.BulkMerge(list, options =>
{
    options.TemporaryTableName = "#MyTemporaryTableName";
    options.TemporaryTableCreateAndDrop = true;
});
```
💡 Why it can be useful
- One-step setup – Automatically creates and cleans up the staging table with a single option.
- Keeps your database clean – Prevents leftover temporary or staging tables from piling up.
- Great for testing – Ensures each test run starts fresh without manual cleanup.
- Reduces mistakes – No need to remember to drop the table later.
🏷️ TemporaryTablePersist
Keeps the data in the temporary table after the bulk operation is finished.
By default, when the library creates a temporary table automatically, the table (and its data) is dropped at the end of the operation.
When you provide a custom TemporaryTableName, the library also assumes it can safely drop it unless told otherwise.
Setting TemporaryTablePersist = true prevents this cleanup and ensures that the temporary table and its data remain available for as long as the database connection remains open.
```csharp
// @nuget: Z.EntityFramework.Extensions.EFCore
using Z.EntityFramework.Extensions;

context.Database.OpenConnection();

context.BulkMerge(list, options =>
{
    options.TemporaryTableName = "#MyTemporaryTableName";
    options.TemporaryTableCreate = true;
    options.TemporaryTablePersist = true;
});

// You can now reuse or query "#MyTemporaryTableName" here

context.Database.CloseConnection();
```
💡 Why it can be useful
- Reuse populated data – Run multiple bulk operations (insert, update, merge, delete) against the same temporary table without reloading the data.
- Improve performance – Avoids the overhead of re-creating and re-filling the staging table for each step of a process.
- Chained operations – Ideal when several queries in sequence need to work on the same temporary dataset.
- Debugging & inspection – Keep the data visible in the temp table for troubleshooting during the session (see the sketch below).
⚠️ Important note: Temporary tables and their data only persist while the connection is open. Once the connection is closed or disposed, SQL Server automatically drops the table and clears its contents.
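For the inspection case, here is a minimal sketch, assuming the connection opened in the example above is still open and the staged rows were persisted in #MyTemporaryTableName; it queries the temp table with plain ADO.NET on that same connection.

```csharp
// Minimal sketch: run this between BulkMerge and CloseConnection() in the example
// above, while "#MyTemporaryTableName" still exists on the open connection.
using Microsoft.EntityFrameworkCore; // for context.Database.GetDbConnection()

using (var command = context.Database.GetDbConnection().CreateCommand())
{
    // Local temp tables are only visible on the connection that created them.
    command.CommandText = "SELECT COUNT(*) FROM #MyTemporaryTableName;";
    var stagedRows = (int)command.ExecuteScalar();
    Console.WriteLine($"Rows currently staged: {stagedRows}");
}
```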
🏷️ TemporaryTableIsMemory
[TODO]
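The detailed documentation for this option is still to come. Based on the overview above, it creates the staging table as a memory-optimized table when the server supports it; the sketch below only assumes the option is a simple boolean flag.

```csharp
// Minimal sketch, assuming TemporaryTableIsMemory is a boolean flag as described in
// the overview; requires a SQL Server edition/database with In-Memory OLTP support.
// @nuget: Z.EntityFramework.Extensions.EFCore
using Z.EntityFramework.Extensions;

context.BulkMerge(list, options =>
{
    options.TemporaryTableIsMemory = true; // stage rows in a memory-optimized table
});
```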
🏷️ UsePermanentTable
Creates a permanent table instead of a temporary one. By default, the library uses temporary tables that are automatically dropped once the operation is finished.
When UsePermanentTable = true, the staging table is created as a regular permanent table.
If you also want to keep the data after the operation, combine it with TemporaryTablePersist = true. In this case, the permanent table will not be dropped, allowing you to reuse the data until you decide to remove it manually.
```csharp
// @nuget: Z.EntityFramework.Extensions.EFCore
using Z.EntityFramework.Extensions;

context.BulkMerge(list, options =>
{
    options.UsePermanentTable = true;
    options.TemporaryTablePersist = true;
});
```
💡 Why it can be useful
- Persist data beyond the connection – Unlike temp tables, permanent tables are not tied to the connection lifecycle.
- Reuse data across sessions – Keep the staging table available even if the connection closes or the application restarts.
- Integration workflows – Useful when another process, job, or service needs to consume the staged data later.
- Flexible cleanup – You decide when to drop the table, instead of SQL Server doing it automatically.
⚠️ Important note: Since the table is permanent, you are responsible for eventually dropping it manually if you don't need it anymore, otherwise it will stay in your database (see the cleanup sketch below).
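As a cleanup sketch for that note, and assuming you give the permanent staging table an explicit name (MyStagingTable below is purely illustrative), you can drop it yourself with a raw SQL command once the staged data is no longer needed:

```csharp
// Minimal sketch: "MyStagingTable" is an illustrative name chosen here so the
// table can be dropped explicitly once you are done with the staged data.
// @nuget: Z.EntityFramework.Extensions.EFCore
using Microsoft.EntityFrameworkCore; // for context.Database.ExecuteSqlRaw
using Z.EntityFramework.Extensions;

context.BulkMerge(list, options =>
{
    options.UsePermanentTable = true;
    options.TemporaryTableName = "MyStagingTable"; // no '#' prefix => permanent table
    options.TemporaryTableCreate = true;
    options.TemporaryTablePersist = true;          // keep the staged data after the merge
});

// ...consume the staged data, then clean up manually when it is no longer needed
context.Database.ExecuteSqlRaw(
    "IF OBJECT_ID(N'MyStagingTable', N'U') IS NOT NULL DROP TABLE [MyStagingTable];");
```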
🏷️ TemporaryTableBatchByTable
[TODO]
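The detailed documentation for this option is still to come. Per the overview above, it splits the staging work by dividing the total number of rows by the number of batches you specify; the sketch below assumes the option takes an integer, so verify the exact type and semantics before relying on it.

```csharp
// Hedged sketch: assumes TemporaryTableBatchByTable takes an integer, following the
// overview's description ("dividing the total number of rows by the specified
// number of batches"). Confirm the exact semantics before using it in production.
// @nuget: Z.EntityFramework.Extensions.EFCore
using Z.EntityFramework.Extensions;

context.BulkMerge(list, options =>
{
    options.TemporaryTableBatchByTable = 4; // split a very large dataset across batches
});
```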
🏷️ TemporaryTableInsertBatchSize
Controls the batch size used when inserting data into the temporary (or permanent) staging table.
By default, the library chooses an internal batch size automatically. Setting TemporaryTableInsertBatchSize allows you to override this value and fine-tune how many rows are sent per batch.
This option is especially useful when working with very large datasets or when your SQL Server has limitations on packet size or lock escalation.
```csharp
// @nuget: Z.EntityFramework.Extensions.EFCore
using Z.EntityFramework.Extensions;

context.BulkMerge(list, options =>
{
    options.TemporaryTableInsertBatchSize = 5000; // insert in chunks of 5,000 rows
});
```
💡 Why it can be useful
- Avoid timeouts – Large single inserts may hit server limits or take too long; batching makes it safer.
- Control memory usage – Smaller batches reduce memory pressure on both the client and the server.
- Balance speed and stability – You can adjust the batch size to find the sweet spot between throughput and reliability.
- Handle large imports – Breaking down massive imports into smaller pieces makes the process more predictable.
⚠️ Important note: Choosing a batch size that's too small may slow down performance, while a batch size that's too large may cause timeouts or locking issues. Test different values to find what works best for your workload.
🏷️ TemporaryTableMinRecord
Specifies the minimum number of records required before the library creates and uses a temporary table.
If the number of records in your bulk operation is below this threshold, the library will automatically use an inline table instead of creating a staging table.
This helps avoid the overhead of creating and dropping temporary tables when dealing with very small datasets.
By default, the minimum is 20 records.
The default value is intentionally low because SQL Server has a maximum parameter limit, and inline tables are only efficient for small sets of values.
```csharp
// @nuget: Z.EntityFramework.Extensions.EFCore
using Z.EntityFramework.Extensions;

context.BulkMerge(list, options =>
{
    options.TemporaryTableMinRecord = 50; // use inline table if fewer than 50 records
});
```
💡 Why it can be useful
- Faster tiny operations – Inline tables are quicker when only a handful of rows are involved.
- Avoids unnecessary overhead – No need to create/drop a temp table for fewer than N records.
- Optimized defaults – Default value of 20 ensures inline tables are only used where they're truly efficient.
- Fine-tuned performance – You can adjust the threshold depending on your workload (e.g., raise it if you frequently process medium-sized sets).
⚠️ Notes & gotchas
- Inline tables rely on SQL parameters, so they are only practical for small record counts.
- If you set the threshold too high, you may hit SQL Server's maximum parameter limitation.
- For most scenarios, the default of 20 provides the best balance.
🏷️ DisableTemporaryTableClusteredIndex
Disables the default clustered index that the library creates on the ZZZ_Index column of the staging table.
By default, Entity Framework Extensions automatically adds a clustered index on ZZZ_Index to help the merge process run faster and in a predictable order.
Setting DisableTemporaryTableClusteredIndex = true skips this step and creates the table without any clustered index.
```csharp
// @nuget: Z.EntityFramework.Extensions.EFCore
using Z.EntityFramework.Extensions;

context.BulkMerge(list, options =>
{
    options.DisableTemporaryTableClusteredIndex = true;
});
```
💡 Why it can be useful
- Faster initial inserts – Skipping index creation can speed up loading large amounts of data into the staging table.
- Memory & disk efficiency – Avoids the overhead of maintaining a clustered index when it's not needed.
- Troubleshooting – Useful when you want to test raw insert performance or see how the merge behaves without indexing.
⚠️ Notes & gotchas
- The default clustered index on ZZZ_Index usually improves merge/update performance.
- Disabling it may cause slower joins between the staging table and the destination table, especially on large datasets.
- Recommended only for lightweight or experimental scenarios where the index offers no real benefit.
🏷️ TemporaryTableUseTableLock
Requests a table-level lock (TABLOCK) when inserting rows into the staging table (temporary or permanent).
This option is true by default because the staging table is created by the library for the bulk operation, so it is usually safe to lock it exclusively.
```csharp
// @nuget: Z.EntityFramework.Extensions.EFCore
using Z.EntityFramework.Extensions;

context.BulkMerge(list, options =>
{
    options.TemporaryTableUseTableLock = true; // default behavior
});
```
💡 Why it can be useful
- Faster inserts on large loads – Fewer lock acquisitions can significantly speed up bulk inserts.
- Lower locking overhead – Reduces the cost of row/page locks and avoids lock escalation.
- More predictable performance – A single table lock is often cheaper than many fine-grained locks during big batches.
- Safe default – Since the staging table is created exclusively for your operation, locking it won't block other processes in most scenarios.
⚠️ Notes & gotchas
- Blocks concurrent access: If you override this option for a global temp or permanent staging table that may be shared across sessions, it can block other users.
- Scope: This option only affects inserts into the staging table. It does not affect locks on the destination table during the final merge/update step.
- Tuning: In rare cases with very high concurrency, you may set this option to false to reduce blocking, but performance may be slower (see the sketch below).
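Here is a minimal sketch of that override, assuming a staging table that may be shared across sessions (the ##SharedStagingTable name is illustrative):

```csharp
// Minimal sketch: disable TABLOCK only when the staging table may be shared across
// sessions and blocking matters more than raw insert speed. The table name is illustrative.
// @nuget: Z.EntityFramework.Extensions.EFCore
using Z.EntityFramework.Extensions;

context.BulkMerge(list, options =>
{
    options.TemporaryTableName = "##SharedStagingTable"; // global temp table shared across sessions
    options.TemporaryTableCreateAndDrop = true;
    options.TemporaryTableUseTableLock = false; // trade some insert speed for less blocking
});
```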
✅ Best Practices
For large datasets
- Increase TemporaryTableInsertBatchSize to reduce round-trips (test values between 5,000–20,000); see the sketch after this list.
- Keep the default clustered index unless you're only inserting and don't need fast joins.
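As a starting point, a large-load configuration following these tips could look like this sketch; the 10,000 batch size is only an assumed value to test against your own workload.

```csharp
// Minimal sketch for a large import: raise the staging batch size and keep the
// default clustered index and table lock. The value below is a starting point only.
// @nuget: Z.EntityFramework.Extensions.EFCore
using Z.EntityFramework.Extensions;

context.BulkMerge(list, options =>
{
    options.TemporaryTableInsertBatchSize = 10000; // test values between 5,000 and 20,000
    // DisableTemporaryTableClusteredIndex and TemporaryTableUseTableLock keep their defaults
});
```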
For small datasets
- Use the default TemporaryTableMinRecord = 20. Inline tables are faster under this threshold.
- Don't raise the value too high – SQL Server has a max parameter limit.
For debugging & inspection
- Enable TemporaryTablePersist to keep data alive while the connection is open.
- Use UsePermanentTable + TemporaryTablePersist if you need to share data across sessions.
- ⚠️ Remember to drop permanent tables manually after use.
For concurrency
- Leave TemporaryTableUseTableLock = true (default) for best throughput.
- If you expect multiple jobs sharing the same table (global temp or permanent), consider disabling it to avoid blocking.
- Avoid combining static naming options (TemporaryTableName, TemporaryTableUseSameName) when running in parallel jobs.
General tips
- Use TemporaryTableCreateAndDrop when you want one-step setup and cleanup.
- Keep overrides minimal – the defaults are tuned to balance performance and safety in most workloads.
🎯 Conclusion
Temporary tables are at the core of how Entity Framework Extensions achieves high-performance bulk operations in EF Core. They give you the flexibility to balance speed, stability, and debugging visibility depending on your scenario.
With the right configuration you can:
- 🚀 Handle massive EF Core bulk loads efficiently using batch sizing, indexes, and table locks.
- 📉 Skip staging for tiny sets with TemporaryTableMinRecord to avoid unnecessary overhead.
- 🔍 Debug or inspect EF Core operations by persisting temp data or switching to permanent tables.
- 🤝 Stay safe under concurrency by managing locks and avoiding conflicting table names.
Most EF Core projects will work well with the defaults – they're designed to balance performance and safety. But when you hit edge cases (very large jobs, high concurrency, or advanced debugging), the options in this guide become your toolbox.
👉 Use the decision table as your quick roadmap, and the detailed sections when you need to fine-tune. That way, you'll always get the right mix of performance, reliability, and control for your EF Core bulk operations.
ZZZ Projects