dbox
Junior Member
Posts: 89
Post by dbox on Oct 7, 2022 18:59:25 GMT
I've recently been looking into QBasic file I/O more closely and was curious about RANDOM vs BINARY mode. Typically, I use BINARY mode any time I need to read or write binary files in QB. There seems to be a lot of overlap in functionality, with BINARY being the more flexible of the two since you don't have to specify a record size. You can also read and write custom types in BINARY mode, which seems to give you the same functionality.
Anyone have a specific use case where you prefer RANDOM over BINARY? Know of any examples that use RANDOM?
dualbrain
Junior Member
The only bug free code is code that is no longer used.
Posts: 51
Post by dualbrain on Oct 7, 2022 19:26:21 GMT
From what I recall, RANDOM is more about using a file as a "database", where you have a fixed-size record and can easily use the "index" to read/update those records. BINARY, on the other hand, I believe allows you to mix and match as you see fit. You could, of course, use BINARY as a replacement for RANDOM, but you would have to calculate the offset accordingly - whereas RANDOM takes a *record* index, making it very easy to manage a records-style data file. To say this another way, with RANDOM you can simply specify 5 for the record parameter and GET will "SEEK" automatically to the correct byte position; with BINARY you aren't really able to use the "record" parameter of GET that way and have to use SEEK (or compute a byte position) if you want random access.
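To make that concrete, here's a minimal sketch of reading "record 5" both ways (the file name "people.dat" and the TYPE are made up for illustration):

```basic
' Hypothetical example: fetching record 5 in each mode.
Type rec
    age As Integer
    phone As String * 10
End Type
Dim r As rec

' RANDOM: the position parameter of GET is a 1-based RECORD index.
Open "people.dat" For Random As #1 Len = Len(r)
Get #1, 5, r ' the runtime seeks to byte (5 - 1) * Len(r) + 1 for you
Close #1

' BINARY: the same parameter is a 1-based BYTE position,
' so you do the record math yourself.
Open "people.dat" For Binary As #2
Get #2, (5 - 1) * Len(r) + 1, r
Close #2
```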
The cases where I used RANDOM (which was most of the time) were any time I had a TYPE that was used as a data blob where I would have 2 or more of them in a file (e.g. a customer database). BINARY was used less often (for me), and would be when I had some sort of custom file format with multiple "blobs" of differing sizes in a single file, where GET/PUT could read these different blobs (with different sizes/content) as needed. A concrete example of BINARY is reading the RIFF format of a WAV audio file... once I found the correct byte location, I could then SEEK to it and read the information as needed. For those not aware, RIFF sections in a WAV file are of different sizes - typically aligning either with a WinAPI struct or raw content such as audio (duh!) and possibly a thumbnail, lyrics, or anything else. Some of these would be custom third-party structs, and it's easier to read these into a TYPE than having to read each individual data element.
dbox
Junior Member
Posts: 89
Post by dbox on Oct 8, 2022 15:17:44 GMT
When using custom types, the benefit of RANDOM seems somewhat negligible. It will keep track of the record offsets for you, but calculating the offset yourself is fairly trivial.
Compare writing and reading with RANDOM mode:
Type customer
    age As Integer
    phone As String * 10
End Type

Dim Shared cust As customer, recLEN
recLEN = Len(cust) 'get the length of the record type
Print "RecLEN:"; recLEN

Open "datafile.dat" For Random As #1 Len = recLEN
For i = 1 To 4
    Read cust.age, cust.phone
    Put #1, , cust
Next
Close #1

Open "datafile.dat" For Random As #2 Len = recLEN
Get #2, 3, cust
Close #2
Print cust.age, cust.phone

End

Data 59,2223456789,62,4122776477,32,3335551212,49,1234567890
... vs BINARY:
Type customer
    age As Integer
    phone As String * 10
End Type

Dim Shared cust As customer, recLEN
recLEN = Len(cust) 'get the length of the record type
Print "RecLEN:"; recLEN

Open "datafile2.dat" For Binary As #1
For i = 1 To 4
    Read cust.age, cust.phone
    Put #1, , cust
Next
Close #1

Open "datafile2.dat" For Binary As #2
Get #2, 2 * recLEN + 1, cust
Close #2
Print cust.age, cust.phone

End

Data 59,2223456789,62,4122776477,32,3335551212,49,1234567890

I guess it kind of just comes down to whether you want to define the record length on the OPEN or calculate the read/write offset when you GET or PUT.
Perhaps part of the reason for having both is backwards compatibility with GW-BASIC, which only had RANDOM mode from what I can determine. You could also define a file record with the FIELD keyword (since GW-BASIC didn't have custom types). It looks like support for FIELD was dropped in Visual Basic. I never used that method myself, but it's an interesting way to handle multi-field records without custom types:
CONST NM = 30, AD = 30, CY = 15, ST = 2, ZC = 5 'define field and record lengths with constants
CONST RLEN = NM + AD + CY + ST + ZC

OPEN "ADDRESS.DAT" FOR RANDOM AS #1 LEN = RLEN
FIELD #1, NM AS CName$, AD AS Address$, CY AS City$, ST AS State$, ZC AS Zip$
FIELD #1, RLEN AS Clist$ 'define entire record

GET #1, 1 'GET does not need a variable to read FIELD records!
'Read file for zip codes from 15230 to 15239.
DO WHILE NOT EOF(1)
    ZipCheck$ = Zip$ 'read zip codes
    IF (ZipCheck$ >= "15230" AND ZipCheck$ <= "15239") THEN
        Info$ = Clist$
        PRINT LEFT$(Info$, 30) 'read name string
        PRINT MID$(Info$, 31, 30) 'read address string
        PRINT RIGHT$(Info$, 22) 'read city, state and zip code
        PRINT
    END IF
    GET #1 'simply GET reads each FIELD record after the first
LOOP
CLOSE #1
END
dualbrain
Junior Member
The only bug free code is code that is no longer used.
Posts: 51
Post by dualbrain on Oct 10, 2022 15:19:04 GMT
True, but having the runtime do the math for you still has merit... if, of course, you're working with the same TYPE for all records. ;-)