DUPLICATE_TOKENLINE_ELIM

ENTER THE FULL FILENAME.FILETYPE OF THE FILE IN WHICH YOU WISH TO ELIMINATE DUPLICATE LINES, THEN ENTER A CONTROL-Z. THE ORIGINAL FILE WILL BE CHANGED. FOR EXAMPLE, TO ELIMINATE DUPLICATE LINES IN DUM.TXT, ENTER DUM.TXT AND A CONTROL-Z.

THIS COMMAND LINE WILL ELIMINATE DUPLICATE LINES IN A FILE THAT APPEARS AS A NAME-LIST TYPE OF FILE, OR THE LINES MAY CONTAIN ANY SET OF ASCII STRINGS.

CAUTION - ONLY THE FIRST 127 CHARACTERS OF THE TOKEN, OR ASCII STRING, ON EACH LINE WILL BE CONSIDERED, BECAUSE OF SEARCH-STRING LIMITATIONS IN TECO. ALSO, THE SEARCH MATCH IS EXACT, EVEN AS TO CASE, SO IT IS SUGGESTED THAT THE USER RUN SQUEZE FIRST TO PACK UP THE TOKENS IN EACH LINE, THEN RUN TRAN WITH UPPERCASE.TXT OR LOWERCASE.TXT TO CONVERT THE FILE TO ALL UPPER- OR LOWER-CASE TEXT. SEE THE HELP FILES TO FIND OUT HOW TO RUN THESE EXEC'S.

FLASH - A NEW EXEC, REPLACE_ASCII_STRING, WILL CONVERT CASE EVEN FASTER. USE UPCASE.TXT OR LOCASE.TXT WITH IT!

NOTE - THIS LATEST VERSION RUNS SLOWER BUT WILL HANDLE FILES OF ANY SIZE. HOWEVER, THE EXEC REQUIRES AT LEAST 4 OR 5 TIMES AS MUCH FREE USER DISK SPACE AS THE SIZE OF THE FILE TO BE OPERATED UPON! ALTERNATIVELY, THE USER MAY SELECT THE OLDER, FASTER VERSION, WHICH WILL HANDLE FILES UP TO ABOUT 20 BLOCKS IN SIZE.

NOTE - ANOTHER, MORE USER-INTENSIVE METHOD OF HANDLING LARGER FILES WITHOUT INCURRING TREMENDOUS AMOUNTS OF CPU TIME IS TO SORT THE FILE USING THE SORT AND MERGE UTILITIES, THEN BREAK THE LARGE USER FILE INTO CHUNKS OF ABOUT 20 BLOCKS (MULTIPLE FILES). MERGE A DUMMY FILE, THEN EXTRACT THE FILE SEPARATOR: "*;C*;C*;C*;C*;C*;C*;C" "" AND EDIT THE ORIGINAL FILE, PLACING THE FILE SEPARATOR IN ONE FEWER PLACES THAN THE NUMBER OF FILES YOU DESIRE THE FILE TO BE SEPARATED INTO, ALSO PLACING IT AT THE END OF THE FILE. THEN USE EXEC SPLIT AND A USER-CREATED NAME LIST TO BREAK THE FILE INTO SMALLER FILES. (SOUNDS COMPLICATED, BUT IT REALLY ISN'T.) THEN RUN THE FAST VERSION ON EACH OF THE SMALLER FILES AND MERGE THEM BACK TOGETHER AGAIN.
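AS A MODERN ILLUSTRATION ONLY (NOT PART OF THE ORIGINAL EXEC'S), THE BEHAVIOR DESCRIBED ABOVE - KEEP THE FIRST OCCURRENCE OF EACH LINE, COMPARING ONLY THE FIRST 127 CHARACTERS, WITH AN EXACT CASE-SENSITIVE MATCH - CAN BE SKETCHED IN PYTHON:

```python
# Modern sketch (NOT the original TECO exec) of the behavior described above:
# keep the first occurrence of each line; only the first 127 characters are
# compared, and the match is exact even as to case.

def eliminate_duplicate_lines(lines, key_length=127):
    """Return lines with duplicates removed; only the first key_length
    characters of each line are compared, exactly as to case."""
    seen = set()
    result = []
    for line in lines:
        key = line[:key_length]   # only the first 127 characters are considered
        if key not in seen:
            seen.add(key)
            result.append(line)
    return result

# "ABC" and "abc" are NOT duplicates, because the match is case-exact.
print(eliminate_duplicate_lines(["ABC", "abc", "ABC", "XYZ"]))
# -> ['ABC', 'abc', 'XYZ']
```

THIS ALSO SHOWS WHY TWO LINES THAT AGREE IN THEIR FIRST 127 CHARACTERS BUT DIFFER BEYOND THEM ARE TREATED AS DUPLICATES.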
A FINAL PASS THROUGH THE SLOW VERSION WILL THEN ENSURE THAT ALL DUPLICATES HAVE BEEN ELIMINATED. THE REASON FOR THIS SUGGESTION IS THAT THE SLOW VERSION MAKES "n" PASSES THROUGH THE ORIGINAL FILE, BECAUSE TECO 11 CANNOT CURRENTLY BACK UP THROUGH A FILE. SO IF THE USER IS WORKING ON A 30000-LINE FILE, IT WOULD TAKE 30000 PASSES THROUGH A 30000-LINE FILE! IN THAT CASE THE METHOD SUGGESTED ABOVE WOULD SAVE ORDERS AND ORDERS OF MAGNITUDE OF CPU TIME!

CAUTION - THE ORIGINAL FILE WILL BE MODIFIED!

ENTER THE FULL FILENAME.FILETYPE OF THE FILE IN WHICH YOU WISH TO ELIMINATE DUPLICATE LINES, THEN ENTER A CONTROL-Z.
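AS A MODERN ILLUSTRATION ONLY (NOT THE ORIGINAL SORT AND MERGE UTILITIES), THE REASON SORTING FIRST SAVES SO MUCH CPU TIME CAN BE SKETCHED IN PYTHON: AFTER A SORT, ALL DUPLICATE LINES ARE ADJACENT, SO A SINGLE PASS REMOVES THEM, INSTEAD OF THE "n" PASSES THE SLOW VERSION MUST MAKE.

```python
# Modern sketch (NOT the original utilities) of the sort-first strategy:
# once the file is sorted, duplicate lines sit next to each other, so one
# pass suffices, instead of one full pass per line (n passes for n lines).

def dedup_sorted(lines):
    """Sort, then single pass: drop each line equal to its predecessor."""
    out = []
    for line in sorted(lines):
        if not out or line != out[-1]:
            out.append(line)
    return out

lines = ["DUM", "FOO", "DUM", "BAR", "FOO"]
print(dedup_sorted(lines))  # -> ['BAR', 'DUM', 'FOO']
```

NOTE THAT SORTING CHANGES THE LINE ORDER, WHICH IS WHY THE METHOD ABOVE SPLITS THE SORTED FILE INTO CHUNKS AND MERGES THEM BACK TOGETHER AFTERWARD.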