1. Obtaining the uploaded data

In a background ASP program, reading the ordinary ASCII data submitted by a <form> is very easy. If you need to receive uploaded files, however, you must read them with the BinaryRead method of the Request object. BinaryRead reads a specified number of bytes from the current input stream as binary data. One thing to note is that once BinaryRead has been called, the Request.Form and Request.QueryString collections can no longer be used. Combined with the TotalBytes property of the Request object, all of the data submitted by the form can be read as one binary block, but that data is encoded.
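For reference, the kind of front-end form being discussed looks roughly like the sketch below. It is only an illustration: the field names file1 and filename are taken from the test output shown later in this article, and upload.asp is simply the example page name used here.

<!-- Minimal upload form sketch. enctype="multipart/form-data" is what makes
     the browser encode the request body in the format examined below. -->
<form method="post" action="upload.asp" enctype="multipart/form-data">
    <input type="file" name="file1">
    <input type="text" name="filename" value="default filename">
    <input type="submit" value="Upload">
</form>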
Let us first look at how this data is encoded and whether there is any pattern to it. In the code below, the result of BinaryRead is converted to text and written out by the background page upload.asp (note: do not use this example to upload a large file, or the browser may hang):

<%
Dim biData, PostData, Size
Size = Request.TotalBytes
biData = Request.BinaryRead(Size)
PostData = BinaryToString(biData, Size)
Response.Write "<pre>" & PostData & "</pre>"   ' use <pre> to preserve the output format

' Convert the binary stream into text
Function BinaryToString(biData, Size)
    Const adLongVarChar = 201
    Set RS = CreateObject("ADODB.Recordset")
    RS.Fields.Append "mBinary", adLongVarChar, Size
    RS.Open
    RS.AddNew
    RS("mBinary").AppendChunk biData
    RS.Update
    BinaryToString = RS("mBinary").Value
    RS.Close
End Function
%>

Simple enough. To test it, upload a small text file (g:/homepage.txt, whose content is "Baoyu: http://www.webuc.net"), leave the text box named filename at its default value "default filename", and submit. The output is:

-----------------------------7d429871607fe
Content-Disposition: form-data; name="file1"; filename="g:/homepage.txt"
Content-Type: text/plain

Baoyu: http://www.webuc.net
-----------------------------7d429871607fe
Content-Disposition: form-data; name="filename"

default filename
-----------------------------7d429871607fe--

As you can see, the items in the form are separated from one another by the boundary "-----------------------------7d429871607fe". Each part begins with a few description lines, for example Content-Disposition: form-data; name="filename"; from name="filename" you can tell the name of the form field. If the description also contains something like filename="g:/homepage.txt", then the part is an uploaded file, and an extra line such as Content-Type: text/plain describes the file's content type. The description lines and the body of each part are separated by a blank line.
Well, the rule is now basically clear. Following it, we know how to split the data apart and then process the separated pieces. But one problem is easy to overlook: how do we know the boundary value (in the example above, "-----------------------------7d429871607fe")? The boundary is different for every upload. Fortunately, ASP can obtain it through Request.ServerVariables("HTTP_CONTENT_TYPE"); in the example above HTTP_CONTENT_TYPE contains: "multipart/form-data; boundary=---------------------------7d429871607fe". With this we can not only check whether the form was posted with enctype="multipart/form-data" (if it was not, there is no need to run the code below at all), we can also obtain the boundary value boundary=---------------------------7d429871607fe. (Note: the boundary obtained here has two fewer "-" than the separators that actually appear in the posted data, so it is best to prepend them before using it.) As for how to analyse the data itself, there is no need to go into detail; it is nothing more than using InStr, Mid and similar functions to cut out the data we want.

2. Chunked upload and recording progress

To drive a progress bar in real time, the essence is knowing, in real time, how much data has already been received. Recall the process we implemented: everything is obtained in one call to Request.BinaryRead(Request.TotalBytes), and during that call we have no way of knowing how much data the server has received so far. So we have to change the approach: if we split the incoming data into pieces as we receive it, then from the number of pieces already read we can calculate how much has been uploaded. That is, if each piece is 1 KB, a 1 MB input stream is divided into 1024 pieces; when 100 pieces have been read, 100 KB has been uploaded. When I first suggested reading in pieces, many people thought it impossible, because they overlooked the fact that BinaryRead can not only read a specified number of bytes, it can also be called repeatedly to keep reading the stream.
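Since both the example above and the one that follows assume a multipart post, here is a hedged sketch of the enctype check and boundary extraction just described; the variable names are mine and error handling is omitted:

<%
' Sketch: recover the boundary string before splitting the posted data.
Dim ContentType, Boundary
ContentType = Request.ServerVariables("HTTP_CONTENT_TYPE")

If InStr(1, ContentType, "multipart/form-data", vbTextCompare) > 0 Then
    ' The browser announces the boundary after "boundary=" ...
    Boundary = Mid(ContentType, InStr(1, ContentType, "boundary=", vbTextCompare) + Len("boundary="))
    ' ... but the separators inside the posted data carry two extra leading dashes.
    Boundary = "--" & Boundary
    ' The converted text can then be cut apart, e.g. with InStr/Mid or
    ' Split(PostData, Boundary), and each part examined for name="..."
    ' and filename="..." in its description lines.
Else
    ' Not a multipart post: Request.Form / Request.QueryString work as usual.
End If
%>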
Let us write an example to verify that chunked reads are complete, building on the example above (again, do not use it to upload a large file, or the browser may hang):

<%
Dim biData, PostData, TotalBytes, ChunkBytes, ReadedBytes
ChunkBytes = 1 * 1024               ' chunk size: 1 KB
TotalBytes = Request.TotalBytes     ' total size
PostData = ""                       ' the data converted to text
ReadedBytes = 0                     ' initialised to 0

' read the input stream chunk by chunk
Do While ReadedBytes < TotalBytes
    biData = Request.BinaryRead(ChunkBytes)
    PostData = PostData & BinaryToString(biData, ChunkBytes)
    ReadedBytes = ReadedBytes + ChunkBytes
    If ReadedBytes > TotalBytes Then ReadedBytes = TotalBytes
Loop

Response.Write "<pre>" & PostData & "</pre>"   ' use <pre> to preserve the output format

' Convert the binary stream into text
Function BinaryToString(biData, Size)
    Const adLongVarChar = 201
    Set RS = CreateObject("ADODB.Recordset")
    RS.Fields.Append "mBinary", adLongVarChar, Size
    RS.Open
    RS.AddNew
    RS("mBinary").AppendChunk biData
    RS.Update
    BinaryToString = RS("mBinary").Value
    RS.Close
End Function
%>

Test it with the same uploaded text file: the output shows that the content read chunk by chunk is complete. And inside the While loop we can record the current state into the Application object on every pass, so that another page can simply read the Application to obtain the upload progress dynamically.
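Reading that state back out is then straightforward. Below is a minimal sketch of such a polling page; the page name progress.asp, the Application keys ReadedBytes and TotalBytes, and the use of a single global pair of keys (a real component would need one pair per upload session) are all assumptions made for this illustration.

<%
' progress.asp (hypothetical): polled by the browser while the upload page
' is still looping over Request.BinaryRead and updating Application.
Dim Readed, Total, Percent
Percent = 0
Readed = Application("ReadedBytes")
Total = Application("TotalBytes")

If IsNumeric(Total) Then
    If CDbl(Total) > 0 Then Percent = Int(CDbl(Readed) / CDbl(Total) * 100)
End If

Response.Write "Uploaded " & Readed & " of " & Total & " bytes (" & Percent & "%)"
%>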
One more note: the chunked-read example above splices the pieces together as a string purely to demonstrate that the data is complete. If all you need is the progress, you can keep the reads binary and simply record how much has been read:

' read the data as binary only, recording the progress
Do While ReadedBytes < TotalBytes
    biData = Request.BinaryRead(ChunkBytes)
    ReadedBytes = ReadedBytes + ChunkBytes
    If ReadedBytes > TotalBytes Then ReadedBytes = TotalBytes
    Application("ReadedBytes") = ReadedBytes
Loop

3. Saving the uploaded file

The data obtained through Request.BinaryRead is a binary stream, and saving the upload means turning that stream into a file on disk. For binary data you can save it to a file with the ADODB.Stream object; for text data, with the Write method of the TextStream object; and text and binary data can be converted into each other quite conveniently. For small uploads the two approaches make essentially no difference, but there is still a distinction. With the ADODB.Stream object, all of the data must be loaded before it can be saved, so uploading a large file this way consumes a lot of memory. With the TextStream object, once the file has been created you can Write to it part by part, which spares the server's memory; combined with the chunked-read principle above, we can write each chunk of uploaded data to the file as soon as we receive it. I have run a test, uploading a file of more than 200 MB from my machine. With the first approach, memory usage kept climbing until the computer finally reported that virtual memory was insufficient; most annoyingly, even though the progress bar showed the upload as finished, the file was never saved. With the second approach, memory usage stayed essentially unchanged throughout the upload.

4. Unsolved puzzles

I saw Bestcomy describe on his blog that his ASP.NET upload component is not constrained by Server.ScriptTimeout. I have not managed that in ASP; all I can do when uploading a large file is set Server.ScriptTimeout to a large value, and I do not know whether there is a better solution. Also, if we save the file with the Write method of the TextStream object, then when a user's upload is interrupted, the part that has already been transferred is still on disk; it would be even better if the upload could be resumed from that point. The key problem is the Request.BinaryRead method: it can only read sequentially and cannot skip over a given segment.

5. Closing words

The principle is basically clear, but real code is far more complicated than this and has to take many problems into account. The troublesome part is analysing the data: for every block of data you must determine whether it is description information, whether it is an ordinary form item or an uploaded file, whether the file has finished uploading, and so on. I believe that, following the description above, you can develop your own component-free upload component as well.
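Finally, as a rough illustration of the chunk-by-chunk save described in section 3, the sketch below writes the raw request body to disk with a TextStream as it is read, so the whole upload never has to sit in server memory at once. The target path is hypothetical, BinaryToString is the helper function from the earlier examples, and a real component would of course first parse out the file part and strip the boundaries and description lines instead of dumping the entire request body.

<%
' Sketch only: stream the raw request body straight to disk in 64 KB pieces.
Dim FSO, TS, TotalBytes, ReadedBytes, ChunkBytes, biData
ChunkBytes = 64 * 1024                                       ' chunk size: 64 KB

Set FSO = CreateObject("Scripting.FileSystemObject")
Set TS = FSO.CreateTextFile("C:\upload\rawpost.dat", True)   ' hypothetical path

TotalBytes = Request.TotalBytes
ReadedBytes = 0

Do While ReadedBytes < TotalBytes
    biData = Request.BinaryRead(ChunkBytes)
    TS.Write BinaryToString(biData, ChunkBytes)   ' convert this chunk, append it to the file
    ReadedBytes = ReadedBytes + ChunkBytes
    If ReadedBytes > TotalBytes Then ReadedBytes = TotalBytes
    Application("ReadedBytes") = ReadedBytes      ' keep the progress record working as well
Loop

TS.Close

' BinaryToString is the recordset-based helper defined in the examples above.
%>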