Upload Large Files to MVC / WebAPI using Partitioning
- Download WinFileUpload - 300.3 KB
- Download MVCServer - 8.7 MB
Introduction
Sending large files to an MVC/Web-API server can be problematic - this article is about an alternative. The approach used is to break a large file up into small chunks, upload them, then merge them back together on the server - file transfer by partitioning. The article shows sending files to an MVC server from both a webpage using JavaScript, and a Windows Forms HttpClient, and can be implemented using either MVC or WebAPI.
In my experience, the larger the file you need to upload to a website/API, the bigger the potential issues you see. Even when you put the right settings in place - adjust your web.config, make sure you use the right multiplier for maxRequestLength and maxAllowedContentLength and of course, don't forget about executionTimeout (eeks!) - things can still go wrong. Connections can fail when the file is *almost* transferred, servers unexpectedly (Murphy's law) run out of space, etc., the list goes on. The diagram below demonstrates the basic concept discussed in this article.
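For reference, the web.config settings mentioned above typically look something like the snippet below. The values are illustrative only - note the multiplier trap: maxRequestLength is specified in kilobytes, while maxAllowedContentLength is specified in bytes (both here allow roughly 50 MB):

```xml
<!-- Illustrative values only: maxRequestLength is in KB, maxAllowedContentLength in bytes -->
<system.web>
  <httpRuntime maxRequestLength="51200" executionTimeout="300" />
</system.web>
<system.webServer>
  <security>
    <requestFiltering>
      <requestLimits maxAllowedContentLength="52428800" />
    </requestFiltering>
  </security>
</system.webServer>
```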
Background
The concept for this solution is very simple. The attached code works (I have used it in production), and can be improved by you in many ways. For example, for the purposes of this article, the original large file is broken into circa 1 MB chunks, and uploaded to the server sequentially, one chunk at a time. This could, for instance, be made more efficient by threading, and sending chunks in parallel. It could also be made more robust by adding fault tolerance, auto-resume, a REST API architecture, etc. I leave you to implement these features yourself if you need them.
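As a sketch of the parallel-upload improvement suggested above (not part of the attached demo), chunks could be sent a few at a time instead of strictly sequentially. The helper below runs an array of task functions with a fixed concurrency limit; uploadChunk is a hypothetical stand-in for whatever transfer call you use:

```javascript
// Run async tasks with a fixed concurrency limit (sketch only).
// "tasks" is an array of functions, each returning a Promise.
async function runWithConcurrency(tasks, limit) {
  const results = new Array(tasks.length);
  let next = 0;
  async function worker() {
    while (next < tasks.length) {
      const i = next++;            // claim the next task index
      results[i] = await tasks[i]();
    }
  }
  // Start "limit" workers that drain the shared task queue.
  const workers = Array.from({ length: Math.min(limit, tasks.length) }, worker);
  await Promise.all(workers);
  return results;
}

// Hypothetical usage: upload all chunks, at most 3 in flight at once.
// const tasks = chunks.map(c => () => uploadChunk(c));
// runWithConcurrency(tasks, 3).then(() => console.log('all chunks sent'));
```

Keep in mind that once chunks arrive out of order in parallel, the server-side merge trigger must still wait until every part is present, which the naming convention used later in this article already allows for.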
The code consists of two parts - the initial file-split/segmentation into chunks, and the final merge of the chunks back into the original file. I will demonstrate the file-split using both C# in a Windows Forms app, and JavaScript, and the file-merge using C# server-side.
File Split
The concept of splitting a file is very basic. We traverse the file in a binary stream, from position zero, up to the last byte in the file, copying out chunks of binary data along the way and transferring these. Generally, we set an arbitrary (or carefully thought out!) chunk size to extract, and use this as the amount of data to take at a time. Anything left over at the end is the last chunk.
In the example below, a chunk size of 128b is set. For the file shown, this gives us 3 x 128b chunks, and 1 x 32b. In this instance, there are four file chunks resulting from the split to transfer to the server.
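The chunk count is just a ceiling division of the file size by the chunk size, with a minimum of one chunk - a quick sketch (not part of the attached demo):

```javascript
// Number of chunks needed to cover fileSize bytes with chunkSize-byte pieces.
// Anything left over after the full chunks becomes the final, smaller chunk.
function chunkCount(fileSize, chunkSize) {
  return Math.max(1, Math.ceil(fileSize / chunkSize));
}

// The 416-byte file above with a 128-byte chunk size:
// 3 full 128-byte chunks plus one 32-byte remainder = 4 chunks.
```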
C# File Split
The accompanying demo "WinFileUpload" is a simple Windows Forms application. Its sole purpose is to demonstrate splitting a sample large file (50MB) in C#, and using an HttpClient to post the file to a web-server (in this case, an MVC server).
For this C# example, I have a class called Utils, which takes some input variables such as maximum file chunk size, temporary folder location, and the name of the file to split. To split the file into chunks, we call the method "SplitFile". SplitFile works its way through the input file and breaks it into separate file chunks. We then upload each file chunk using "UploadFile".
Utils ut = new Utils();
ut.FileName = "hs-2004-15-b-full_tif.bmp";
ut.TempFolder = Path.Combine(CurrentFolder, "Temp");
ut.MaxFileSizeMB = 1;
ut.SplitFile();

foreach (string File in ut.FileParts)
{
    UploadFile(File);
}

MessageBox.Show("Upload complete!");
The file upload method takes an input file-name, and uses an HttpClient to upload the file. Note that we are sending MultipartFormData to carry the payload.
public bool UploadFile(string FileName)
{
    bool rslt = false;
    using (var client = new HttpClient())
    {
        using (var content = new MultipartFormDataContent())
        {
            var fileContent = new ByteArrayContent(System.IO.File.ReadAllBytes(FileName));
            fileContent.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment")
            {
                FileName = Path.GetFileName(FileName)
            };
            content.Add(fileContent);
            var requestUri = "http://localhost:8170/Home/UploadFile/";
            try
            {
                var result = client.PostAsync(requestUri, content).Result;
                rslt = true;
            }
            catch (Exception ex)
            {
                rslt = false;
            }
        }
    }
    return rslt;
}
So, that's the supporting code out of the way. One of the critical things to be aware of next is the file naming convention that is being used. It consists of the original file-name, plus a code-parsable tail ".part_N.X" that will be used server-side to merge the different file chunks back into a single contiguous file again. This is simply the convention I put together - you can change it to your own requirements, just be sure you are consistent with it.
The convention for this example is:
Name = original name + ".part_N.X" (N = file part number, X = total files)
Here is an example of a picture file split into three parts:
- MyPictureFile.jpg.part_1.3
- MyPictureFile.jpg.part_2.3
- MyPictureFile.jpg.part_3.3
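A small sketch of parsing that convention (the server-side C# merge later in the article does the equivalent with Substring/IndexOf):

```javascript
// Parse "<base>.part_<N>.<X>" into its components.
// Sketch only - assumes the name is well-formed per the convention above.
function parseChunkName(name) {
  const token = '.part_';
  const at = name.lastIndexOf(token);
  const [part, total] = name.slice(at + token.length).split('.');
  return {
    baseName: name.slice(0, at),      // original file name
    partNumber: parseInt(part, 10),   // N: this chunk's position
    totalParts: parseInt(total, 10)   // X: how many chunks exist in total
  };
}
```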
It doesn't matter what order the file chunks are sent to the server. The important thing is that some convention, like the above, is used, so that the server knows (a) what file part it is dealing with and (b) when all parts have been received and can be merged back into one large original file again.
Next, here is the meat of the C# code that scans the file, creating multiple chunk files ready to transfer.
public bool SplitFile()
{
    bool rslt = false;
    string BaseFileName = Path.GetFileName(FileName);
    int BufferChunkSize = MaxFileSizeMB * (1024 * 1024);
    const int READBUFFER_SIZE = 1024;
    byte[] FSBuffer = new byte[READBUFFER_SIZE];

    using (FileStream FS = new FileStream(FileName, FileMode.Open, FileAccess.Read, FileShare.Read))
    {
        // Work out how many chunk files we will produce in total.
        int TotalFileParts = 0;
        if (FS.Length < BufferChunkSize)
        {
            TotalFileParts = 1;
        }
        else
        {
            float PreciseFileParts = ((float)FS.Length / (float)BufferChunkSize);
            TotalFileParts = (int)Math.Ceiling(PreciseFileParts);
        }

        int FilePartCount = 0;
        // Traverse the source file, writing out one chunk file at a time.
        while (FS.Position < FS.Length)
        {
            string FilePartName = String.Format("{0}.part_{1}.{2}",
                BaseFileName, (FilePartCount + 1).ToString(), TotalFileParts.ToString());
            FilePartName = Path.Combine(TempFolder, FilePartName);
            FileParts.Add(FilePartName);
            using (FileStream FilePart = new FileStream(FilePartName, FileMode.Create))
            {
                int bytesRemaining = BufferChunkSize;
                int bytesRead = 0;
                while (bytesRemaining > 0 &&
                       (bytesRead = FS.Read(FSBuffer, 0, Math.Min(bytesRemaining, READBUFFER_SIZE))) > 0)
                {
                    FilePart.Write(FSBuffer, 0, bytesRead);
                    bytesRemaining -= bytesRead;
                }
            }
            FilePartCount++;
        }
        rslt = true;
    }
    return rslt;
}
That's it for the C# client-side - we will see the result and how to handle things server-side later in the article. Next, let's look at how to do the same thing in JavaScript, from a web browser.
JavaScript File Split
NB: The JavaScript code, and the C# Merge code are contained in the attached demo file "MVCServer".
In our browser, we have an input control of type "file", and a button to call a method that initiates the file-split and data transfer.
<input type="file" id="uploadFile" name="file" />
<a class="btn btn-primary" href="#" id="btnUpload">Upload file</a>
On document ready, we bind to the click event of the button to call the main method:
$(document).ready(function () {
    $('#btnUpload').click(function () {
        UploadFile($('#uploadFile')[0].files);
    });
});
Our UploadFile method does the work of splitting the file into chunks, and as in our C# example, passing the chunks off to another method for transfer. The main difference here is that in C#, we created individual files; in our JavaScript example, we are taking the chunks from an in-memory array instead.
function UploadFile(TargetFile) {
    var FileChunk = [];
    var file = TargetFile[0];
    var MaxFileSizeMB = 1;
    var BufferChunkSize = MaxFileSizeMB * (1024 * 1024);
    var ReadBuffer_Size = 1024;
    var FileStreamPos = 0;
    var EndPos = BufferChunkSize;
    var Size = file.size;

    // Slice the file into chunk-sized Blobs.
    while (FileStreamPos < Size) {
        FileChunk.push(file.slice(FileStreamPos, EndPos));
        FileStreamPos = EndPos;
        EndPos = FileStreamPos + BufferChunkSize;
    }

    var TotalParts = FileChunk.length;
    var PartCount = 0;
    var chunk;
    while (chunk = FileChunk.shift()) {
        PartCount++;
        var FilePartName = file.name + ".part_" + PartCount + "." + TotalParts;
        UploadFileChunk(chunk, FilePartName);
    }
}
UploadFileChunk takes the part of the file handed to it by the previous method, and posts it to the server in a similar manner to the C# example:
function UploadFileChunk(Chunk, FileName) {
    var FD = new FormData();
    FD.append('file', Chunk, FileName);
    $.ajax({
        type: "POST",
        url: 'http://localhost:8170/Home/UploadFile/',
        contentType: false,
        processData: false,
        data: FD
    });
}
File Merge
NB: The JavaScript code, and the C# Merge code are contained in the attached demo file "MVCServer".
Over on the server, be that MVC or Web-API, we receive the individual file chunks and need to merge them back together again into the original file.
The first thing we do is put a standard POST handler in place to receive the file chunks being posted up to the server. This code takes the input stream, and saves it to a temp folder using the file-name created by the client (C# or JavaScript). Once the file is saved, the code then calls the "MergeFile" method which checks if it has enough file chunks available yet to merge the file back together. Note that this is simply the method I have used for this article. You may decide to handle the merge trigger differently, for example, running a job on a timer every few minutes, passing off to another process, etc. It should be changed depending on your own required implementation:
[HttpPost]
public HttpResponseMessage UploadFile()
{
    foreach (string file in Request.Files)
    {
        var FileDataContent = Request.Files[file];
        if (FileDataContent != null && FileDataContent.ContentLength > 0)
        {
            // Take the input stream, and save the chunk to a temp folder
            // using the file-name sent by the client.
            var stream = FileDataContent.InputStream;
            var fileName = Path.GetFileName(FileDataContent.FileName);
            var UploadPath = Server.MapPath("~/App_Data/uploads");
            Directory.CreateDirectory(UploadPath);
            string path = Path.Combine(UploadPath, fileName);
            try
            {
                if (System.IO.File.Exists(path))
                    System.IO.File.Delete(path);
                using (var fileStream = System.IO.File.Create(path))
                {
                    stream.CopyTo(fileStream);
                }
                // Trigger a merge attempt - it does nothing until all parts are present.
                Shared.Utils UT = new Shared.Utils();
                UT.MergeFile(path);
            }
            catch (IOException ex)
            {
            }
        }
    }
    return new HttpResponseMessage()
    {
        StatusCode = System.Net.HttpStatusCode.OK,
        Content = new StringContent("File uploaded.")
    };
}
Each time we call the MergeFile method, it first checks to see if we have all of the file chunk parts required to merge the original file back together again. It determines this by parsing the file-names. If all files are present, the method sorts them into the correct order, and then appends one to another until the original file that was split is back together again.
public bool MergeFile(string FileName)
{
    bool rslt = false;
    string partToken = ".part_";
    string baseFileName = FileName.Substring(0, FileName.IndexOf(partToken));
    string trailingTokens = FileName.Substring(FileName.IndexOf(partToken) + partToken.Length);
    int FileIndex = 0;
    int FileCount = 0;
    int.TryParse(trailingTokens.Substring(0, trailingTokens.IndexOf(".")), out FileIndex);
    int.TryParse(trailingTokens.Substring(trailingTokens.IndexOf(".") + 1), out FileCount);
    string Searchpattern = Path.GetFileName(baseFileName) + partToken + "*";
    string[] FilesList = Directory.GetFiles(Path.GetDirectoryName(FileName), Searchpattern);
    if (FilesList.Count() == FileCount)
    {
        // All parts are present - merge them, guarding against re-entry.
        if (!MergeFileManager.Instance.InUse(baseFileName))
        {
            MergeFileManager.Instance.AddFile(baseFileName);
            if (File.Exists(baseFileName))
                File.Delete(baseFileName);
            // Parse each part file-name and sort into numeric part order.
            List<SortedFile> MergeList = new List<SortedFile>();
            foreach (string File in FilesList)
            {
                SortedFile sFile = new SortedFile();
                sFile.FileName = File;
                baseFileName = File.Substring(0, File.IndexOf(partToken));
                trailingTokens = File.Substring(File.IndexOf(partToken) + partToken.Length);
                int.TryParse(trailingTokens.Substring(0, trailingTokens.IndexOf(".")), out FileIndex);
                sFile.FileOrder = FileIndex;
                MergeList.Add(sFile);
            }
            var MergeOrder = MergeList.OrderBy(s => s.FileOrder).ToList();
            // Append each chunk to the output file in order.
            using (FileStream FS = new FileStream(baseFileName, FileMode.Create))
            {
                foreach (var chunk in MergeOrder)
                {
                    try
                    {
                        using (FileStream fileChunk = new FileStream(chunk.FileName, FileMode.Open))
                        {
                            fileChunk.CopyTo(FS);
                        }
                    }
                    catch (IOException ex)
                    {
                    }
                }
            }
            rslt = true;
            MergeFileManager.Instance.RemoveFile(baseFileName);
        }
    }
    return rslt;
}
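One detail worth noting in the ordering step above: the sort must be numeric on the parsed part index. A plain alphabetical sort of the file names would place part_10 before part_2 once a file has ten or more chunks. A quick JavaScript sketch of the same ordering logic:

```javascript
// Sort chunk file names by their numeric part index (sketch only).
// Extracts N from "<base>.part_<N>.<X>" and compares as a number,
// so "part_10" correctly sorts after "part_2".
function sortChunks(names) {
  const partOf = n => parseInt(n.split('.part_')[1], 10);
  return [...names].sort((a, b) => partOf(a) - partOf(b));
}
```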
Using the file split on the client-side, and file-merge on the server-side, we now have a very workable solution for uploading large files in a more secure manner than simply sending them up in one big block of data. For testing, I used some large image files converted to BMP from a Hubble picture here. If the article is useful to you, please give it a vote at the top of the page! :)
History
- 29/09/2015 - Version 1
Source: https://www.codeproject.com/Articles/1034347/Upload-Large-Files-to-MVC-WebAPI-using-Partitionin