SEPTEMBER 1987
 
 
INTERNET MONTHLY REPORTS
------------------------
 
 
The purpose of these reports is to communicate to the Internet Research
Group the accomplishments, milestones reached, or problems discovered by
the participating organizations.
 
     This report is for research use only, and is not for public
     distribution.
 
Each organization is expected to submit a 1/2 page report on the first
business day of the month describing the previous month's activities.
These reports should be submitted via network mail to Ann Westine
(Westine@ISI.EDU) or Karen Roubicek (Roubicek@SH.CS.NET).
 
 
BBN LABORATORIES AND BBN COMMUNICATIONS CORPORATION
---------------------------------------------------
 
     ARPANET PSN 7 SOFTWARE UPGRADE
 
     During the month of October software in the ARPANET packet-
     switching nodes (PSNs) is being upgraded from Release 6 to Release
     7.  This is a major software upgrade that will, when completed,
     introduce the "New End-to-End Protocol" into the ARPANET.  The
     upgrade is being performed in phases.  Some service disruption can
     be expected, as nodes must be loaded with new software and
     restarted.  In addition, prior to the cutover of all ARPANET PSNs
     to the New End-to-End Protocol, groups of nodes will be switched to
     the New End-to-End Protocol for brief test periods.  During these
 
 
 
Westine                                                         [Page 1]

Internet Monthly Report                                  September 1987
 
 
     periods, the nodes testing the New End-to-End Protocol will be
     isolated from the remainder of the ARPANET.  Interested users
     should consult DDN Management Bulletin No. 34 dated 9 September
     1987 for further details.  Users with questions concerning the
     upgrade process may send electronic mail to ARPAUPGRADE@BBN.COM.
 
     ARPANET USER DATABASE CUTOVER
 
     During October, the User Database Host (UDH) will migrate from a
     TOPS-20 machine at ISI (ISI-C) to a MicroVAX-II at BBN.  ISI-C is
     being phased out at the end of October.  The transition to the new
     UDH is planned so as to minimize the impact on TAC users.  The UDH
     cutover can't be completely transparent, but with the cooperation
     of the users and administrators it can be accomplished in the time
     allotted.  The major impact, if any, will be felt mainly by users
     whose passwords expire during the transition period and by new
     users.
 
     The ARPANET TAC Access (TACACS) System is designed to limit access
     to the ARPANET through Terminal Access Controllers (TACs) to
     authorized users.  A user who dials into a TAC port through the
     public switched telephone network is required to give a valid user
     ID and password before the TAC will permit the user to open a
     connection to an ARPANET host.  The system currently running on the
     ARPANET is composed of a User Database Host (UDH), three Login
     Hosts and a monitoring host.  Software for the current User
     Database Host runs on a TOPS-20 system located at ISI.  A new UDH
     system, running on a MicroVAX-II using the INGRES database system
     has been developed by BBN and is ready to be put into service on
     the ARPANET.  The new UDH has been running successfully at BBN for
     over a year.
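     The check described above can be sketched as follows; the user
     table and hashing scheme here are illustrative assumptions, not the
     actual BBN implementation.

```python
import hashlib

# Hypothetical UDH records: user id -> salted hash of the password.
USER_DB = {
    "smith": hashlib.sha256(b"salt:s3cret").hexdigest(),
}

def tac_login(user_id: str, password: str) -> bool:
    """Return True only if the dial-in user presents valid credentials."""
    expected = USER_DB.get(user_id)
    candidate = hashlib.sha256(("salt:" + password).encode()).hexdigest()
    return expected is not None and candidate == expected
```

     A TAC would open the requested connection to an ARPANET host only
     after this check succeeds.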
 
     The current plan is to transition to the new UDH on 10/26/87.  We
     will be sending more details on the cutover to TACACS users in the
     next week.
 
     WIDEBAND NETWORK
 
     The implementation of a "round-robin" control slot assignment
     scheme in the BSAT software has been completed and is undergoing
     testing.  Control slots are fixed size subdivisions of satellite
     channel time.  They are primarily used by the BSATs to broadcast
     reservation requests for the satellite channel capacity required to
     transmit user data.  As such, the control slots represent a
     component of the overhead introduced on the satellite channel by
     the PODA protocol.  There is currently one control slot assigned to
     each BSAT in each PODA frame.  The new round-robin scheme will
     allow the control slots to be assigned such that each frame will
     contain fewer slots than the total number of BSATs; this will give
     each BSAT one control slot assignment in every N frames, where N is
     a network parameter that can be modified by the network manager.
 
 
 
Westine                                                         [Page 2]

Internet Monthly Report                                  September 1987
 
 
     The flexibility inherent in this scheme should allow better
     management of network-wide delay/throughput tradeoffs as a function
     of the network users' needs.
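     The round-robin assignment can be sketched as follows; the function
     and parameter names are illustrative, not the fielded BSAT code.

```python
def control_slots(frame_number: int, num_bsats: int, n: int) -> list[int]:
    """BSAT ids owning a control slot in this frame: one slot per BSAT
    every n frames, where n is the manager-settable network parameter."""
    slots_per_frame = -(-num_bsats // n)  # ceil(num_bsats / n)
    start = (frame_number % n) * slots_per_frame
    return [b for b in range(start, start + slots_per_frame) if b < num_bsats]
```

     With 10 BSATs and n = 4, each frame carries at most 3 control slots
     instead of the 10 required for one-per-frame, and every BSAT still
     gets to broadcast a reservation request once per 4-frame cycle.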
 
     A three-site video conference connection using Wideband Network
     packet broadcast facilities was achieved for the first time on
     September 29.  The three sites involved in the connection were ISI,
     BBN, and a BBN test site (the latter sharing earth station
     facilities with the operational BBN site).  The BBN and ISI sites
     used real video compression hardware.  Since real video compression
     hardware was not available for the BBN test site, that site
     operated in an echo mode, returning all of the video data received
     from one of the other two sites.  Conference participants at each
     site were able to view a video display which was partitioned to
     provide the video images received from the two other sites.
 
     SATNET
 
     The SATNET remained healthy during the month of September.  From
     tests run by ISI, the SIMPs showed an average availability of
     99.5% for the month.  We are continuing to see gateway
     resets at the SATNET sites and are waiting for the next software
     release.
 
     In the middle of the month, however, the Tanum site was reporting a
     large number of packets with errors on channel 0.  No other sites
     were affected.  The problem was isolated to the receive side of the
     modem.  The modem from channel 1, which was not being used, was
     moved to channel 0.  The problem disappeared.  From spare hardware
     in the PSP terminal, channel 1 was also brought up.  One of the
     modules used to bring channel 1 up, however, was suspected of
     causing a past problem.  We are watching it closely for any bad
     behavior.
 
     INTERNET RESEARCH & DEVELOPMENT
 
     We are about half way through fielding Release 3.10 in all of the
     Butterfly Gateway sites.  So far it looks very solid.  We should
     have it in all of the sites in a week or two.
 
     We are also about to turn on the fragmentation of EGP updates in
     the LSI-11 and Butterfly Gateways to support the growth in the
     number of networks.  At last count we saw 299 networks.  This will
     result in the EGP updates being fragmented as soon as it is enabled in
     the gateways.  If you have a gateway which runs EGP with the core
     system (Arpanet and Milnet) and does not do IP reassembly, please
     contact me (hinden@bbn.com) as soon as possible.
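     The arithmetic behind this concern can be sketched as follows; the
     per-route byte count and header size are rough assumptions for
     illustration, not the exact EGP encoding.

```python
BYTES_PER_ROUTE = 7   # assumed average bytes per network entry in an update
EGP_HEADER = 20       # assumed fixed update header overhead
ARPANET_MTU = 1006    # bytes available per IP datagram on the ARPANET

def update_size(networks: int) -> int:
    """Approximate size in bytes of an EGP update listing this many nets."""
    return EGP_HEADER + networks * BYTES_PER_ROUTE

def needs_fragmentation(networks: int, mtu: int = ARPANET_MTU) -> bool:
    return update_size(networks) > mtu
```

     Under these assumptions, at 299 networks the update is roughly
     twice the MTU, so a receiver that cannot reassemble IP fragments
     would silently lose routes.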
 
     Bob Hinden  (hinden@PARK-STREET.BBN.COM)
 
 
 
 
 
Westine                                                         [Page 3]

Internet Monthly Report                                  September 1987
 
 
ISI
---
 
     Internet Concepts Project
 
          Greg Finn continues researching recent results on the
          Byzantine generals problem to see if they can be applied
          toward attack resistant routing.
 
          Three RFCs were published.
 
          RFC 1019:  Arnon, Dennis "Report of the Workshop on
                     Environments for Computational Mathematics",
                     Sep., 1987.
 
          RFC 1025:  Postel, J. "TCP and IP Bake Off", Sep., 1987.
 
          RFC 1026:  Kille, S.E., "Addendum to RFC 987 (Mapping between
                     X.400 and RFC-822)", Sep., 1987.
 
          Ann Westine (Westine@ISI.EDU)
 
 
     Multimedia Conferencing Project
 
          This month we established our first three-site video
          conference connection, using an interim implementation of the
          ST conference protocol in the Packet Video Program and Voice
          Funnel, and multicast group addressing in the Wideband Network
          BSATs.  The three sites were ISI, BBN and Ft. Knox (the Ft.
          Knox site is physically located at BBN, but shares only the
          analog portion of the BBN earth station and is effectively the
          same as a separate site).  Each site produced a video data
          stream of 230Kb/s which was delivered through the multicast
          mechanism to the other two sites.
 
          Since we have only two real video compression systems, the Ft.
          Knox copy of the Packet Video Program was put into a data-echo
          mode to return the data stream of the first participant
          connected to it.  At the ISI and BBN sites, the display on the
          video monitor was split into top and bottom halves to show the
          other two sites.  For example, if ISI connected to Ft. Knox
          first and then BBN connected to ISI, the display at ISI would
          show an echo-delayed (via Ft. Knox) ISI image on top, with a
          direct BBN image on the bottom.  At BBN, the direct ISI image
          would be shown on top with an echo-delayed ISI image on the
          bottom -- an interesting effect!  The roles were reversed by
          re-establishing the connection with the BBN to Ft. Knox link
          first.
 
 
 
 
 
Westine                                                         [Page 4]

Internet Monthly Report                                  September 1987
 
 
          In the future we will replace the two prototype video
          compression systems with commercial hardware so the number of
          real sites can be expanded.
 
          Steve Casner (Casner@ISI.EDU)
 
          Brian Hung has been able to successfully incorporate TFTP into
          his Pascal program. This has allowed him to write a
          conferencing mode of his IBM-PC scanner program as described
          in the last monthly report. This program has been written and
          is undergoing tests.
 
          Brian Hung (Hung@ISI.EDU)
 
 
     NSFNET Project
 
          Annette DeSchon continued work on the Background File Transfer
          Program (BFTP).  The major accomplishment was to make a
          version of BFTP which can be invoked with a Telnet connection
          to a well-known port, without login. The intent is to provide
          an FTP executive agent that can be accessed from anywhere, to
          perform third-party transfers between any pair of hosts.
 
          Specifically, Annette developed a server module "bftpd", which
          runs under "inetd", the standard BSD4.x well-known port
           dispatcher, and in turn transfers control to the BFTP user
           interface module "bftp".  When a SYN arrives for the BFTP
           well-known port (152), "bftpd" opens the TCP connection and
           performs
          Telnet negotiations. When it receives control, the user
          interface "bftp" allows the user to enter file transfer
          requests using a TOPS20-style command parser.  The major
          modules of BFTP -- "bftp", "bftpd", and "fts" -- have all
          executed on both Sun workstations under 4.2BSD and a VAX
          running 4.3BSD.
 
          Bob Braden continued work on the statistics collection
          programs, "statspy" and "collect", described in last month's
          report.
 
          Bob Braden & Annette DeSchon (Braden@ISI.EDU, DeSchon@ISI.EDU)
 
 
     Supercomputer and Workstation Communication Project
 
          Alan Katz continued to develop some X-windows based tools
          using the Xt toolkit.  Alan has also been writing a few lisp
          functions for GNU Emacs which also has an X-windows interface.
 
 
 
 
 
 
Westine                                                         [Page 5]

Internet Monthly Report                                  September 1987
 
 
          Alan put together an EQN previewer which can be run from
          either GNU Emacs or at an Xterm window and will preview in an
          X-Window a selected block of EQN (the equations preprocessor
          for troff).
 
          Alan Katz (Katz@ISI.EDU)
 
 
MIT-LCS
-------
 
     No report received.
 
 
MITRE Corporation
-----------------
 
     The objective of the MITRE Internet Engineering program is twofold:
     1)  to address internet level performance issues for the DoD
     internet and 2) to address the interoperability between the DoD and
     OSI protocol suites as support for a planned transition from DoD to
     OSI protocols.  During August and September, MITRE finalized the
     following documents:
 
     1.  Landmark Hierarchy, Description and Analysis.
     2.  Landmark Routing, Architecture, Algorithms, and Issues.
     3.  Assured Destination Binding.
     4.  A Model of Interadministration Network User Authentication and
         Access Control.
     5.  A Communications Protocol for User Authentication and
         Access Control.
     6.  Internetwork Performance Measurements and Analysis.
     7.  An Evaluation of Congestion Control Techniques for the
         Internet, Vol. 1:  Gateway Topology.
     8.  An FTP/FTAM Application Bridge.
     9.  Name Domain in MILNET:  Issues and Solutions.
     10. DoD/ISO Implementation Plan.
     11. DoD Protocol Engineering Plan.
 
     Ann Whitaker (Whitaker@Gateway.Mitre.Org)
 
 
NTA & NDRE
----------
 
     No report received.
 
 
 
 
 
 
 
 
Westine                                                         [Page 6]

Internet Monthly Report                                  September 1987
 
 
SRI
---
 
     Internet Research
 
       Jose Garcia-Luna organized the technical program of the ACM
       SIGCOMM '87 workshop on frontiers in computer communications
       technology, held at Stowe, Vermont on August 11-13, 1987.  The
       workshop was sponsored by ACM with support given by the
       Information Sciences and Technology Center of SRI International.
       The proceedings of the workshop will appear as a special issue
       of the ACM Computer Communications Review.
 
       Jose Garcia-Luna
 
     Multimedia Conferencing/Wideband Network
 
       The Experimental Multimedia Conferencing Environment (EMCE)
       was installed in the CECOM WB Satnet facilities at Ft. Monmouth.
       An initial connectivity and routing check test suite was first
       used to establish an appropriate connection between Ft. Monmouth
        and SRI.  Then, EMCE was used at both sites to communicate and
        discuss a simulated C2 joint operations scenario.  Digital voice,
        normalized pointer, and pictorial (graphic or image) display
       commands were exchanged using the datagram transport mode.
       End-to-end delays of one second were measured.  This
        demonstration is in support of the Joint Directors of
        Laboratories program, showing technologies that can interoperate
        between
       tri-service sites.  This is the first time this software has
       run at a service (Army) WB Satnet site.  Future plans are to
       have similar demonstrations involving RADC (Air Force) and
       NOSC (Navy).
 
       Earl Craighill
 
     Zaw-Sing Su <zsu@tsca.istc.sri.com>
 
 
UCL
---
 
     As announced by Steve Kille, the FTAM service at UCL is available
     over the internet. We are also running a **full** connection-based
     OSI stack inside the UK.
 
     The UCL Network Service System is now totally separate from the UCL
     CS Networks. The advantages of this for access control to the
     internet are great, and the management of relay services is now
     greatly simplified.
 
 
 
 
 
Westine                                                         [Page 7]

Internet Monthly Report                                  September 1987
 
 
     One consequence of the changes is that the SAM (SATNET Access
     Machine) is no longer required for routing and access control, and
     has (sadly) been de-commissioned. We note that its availability
     was rarely below 100%, and that its mean time between failure was
     of order weeks!
 
     We are currently expanding the UCL swamp with MAC Bridges (from a
     UK company called Logic Replacement Technology). These devices
     allow interconnection of Ethernets directly, or separated by up to
     a kilometre using multiple Megastream links (n*2.048 Mbps).
 
     As a consequence of this, we are working on the problems of
     automatic load sharing, link failure resilience, and loop
     resolution for MAC Bridges.
 
     John Crowcroft  (jon@CS.UCL.AC.UK)
 
 
UNIVERSITY OF DELAWARE
----------------------
 
     1.   A massive disruption of the NSFNET community began on 29
          September and continues into October. After several days of
          intensive investigation, it appears that the primary cause is
          a shortage of virtual-circuit resources in the ACC 5250 X.25
           interface of the main NSFNET gateway at the Pittsburgh
          Supercomputing Center. Secondary causes include incorrect
          configurations of some alternate gateways, exceptionally noisy
          lines on the alternate links and instability of the EGP
          gateway data base observed in the core gateways and clients.
          The result is apparent fragmentation of the ARPANET with
          respect to the gateway and much lost traffic. The cure for the
          interface problem may be a redesigned software driver that
          manages virtual circuits more effectively; however, in the
          long run this problem can be expected to impact other gateways
          and PSNs as well. A series of reports on these findings was
          distributed to the tcp-ip list.
 
     2.   Investigation and experiment on Internet congestion control
          continues. An analysis of the NSFNET Backbone logs showed the
          new selective-preemption and timeout policies installed in
          June are working very well. As the result of this experience,
          a suite of congestion-control fairness policies was developed
          and implemented in the fuzzball operating system used in the
          NSFNET Backbone network and elsewhere. The policies, described
          in memos distributed to the INENG Task Force, are designed to
          promote equal access to buffer resources and equalize delay
          dispersions on the output queues.  The current fuzzball
          implementation attempts to adjust the generation rates of
          individual hosts using the ICMP Source Quench message in a
          novel way to achieve this goal.
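           The rate-adjustment idea can be sketched as follows; the
           halving and linear-recovery constants are illustrative, not
           the fuzzball policy itself.

```python
class RateController:
    """Host packet-generation rate reacting to ICMP Source Quench."""

    def __init__(self, rate_pps: float = 100.0, floor_pps: float = 1.0):
        self.rate = rate_pps
        self.floor = floor_pps

    def on_source_quench(self) -> None:
        # Multiplicative cut on each quench, never below the floor.
        self.rate = max(self.floor, self.rate / 2.0)

    def on_quiet_interval(self, increment_pps: float = 5.0) -> None:
        # Linear recovery while no quenches arrive.
        self.rate += increment_pps
```

           Cutting quickly and recovering slowly keeps competing hosts
           converging toward an equal share of the buffer resources.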
 
 
 
Westine                                                         [Page 8]

Internet Monthly Report                                  September 1987
 
 
     3.   The TCP implementation used in the NSFNET Backbone and
          elsewhere was adapted to use the new ICMP Source Quench
          message described above. The flow-control mechanism was
          rebuilt using a unified approach for retransmission, source
           quench, Van Jacobson's "slow start" and John Nagle's famous
          algorithm.
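           The "slow start" element mentioned above can be sketched as
           follows; this is an illustrative model, not the NSFNET TCP
           code.

```python
def slow_start_windows(round_trips: int, rcv_window: int = 16) -> list[int]:
    """Congestion window, in segments, at the start of each round trip:
    each fully acknowledged window doubles cwnd until the receiver's
    advertised window caps it."""
    windows, cwnd = [], 1
    for _ in range(round_trips):
        windows.append(cwnd)
        cwnd = min(cwnd * 2, rcv_window)
    return windows
```

           Starting from a single segment the window opens as 1, 2, 4,
           8, 16, then holds at the receiver's limit.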
 
      4.   The new fuzzware features were thoroughly tested and the
          results distributed to the NSFNET and INENG engineering
          groups. The most dramatic tests took place during the NSFNET
          disruption mentioned above, when massive traffic spasms
           occurred at the MACOMNET and UMDNET fuzzballs, which serve as
          backups for the main NSFNET gateway. It was clear from these
          tests that the new ICMP Source Quench message can be effective
          in controlling medium-term traffic surges lasting some ten or
          more times the mean roundtrip delay.
 
     5.   As the result of observing the various fuzzballs and fuzzball
          networks scattered over the globe, the statistics collection
          and event monitoring systems were overhauled. A giant new
           fuzzware version incorporating these and the other features
          mentioned above is now in distribution to all known fuzzball
          swamps in the US, Hawaii, Norway, Germany and Italy.
 
     6.   Development continues on the Dissimilar Gateway Protocol
          (DGP). Woody Woodburn and Sean Brady of Linkabit have a pre-
          prototype simulation program running on a three Sunstation
          testbed.
 
     7.   Mike Minnich completed an investigation of the performance of
          the BBN LSI-11 core-gateway system over the last two years and
          found some highly interesting things, one of which is that the
          mean length of end-end paths is almost three core-gateway
          hops. Since most of these gateways are huddled on ARPANET and
          MILNET, this suggests either a growing level of LAN-LAN
          traffic via the overloaded ARPANET/MILNET gateways, increasing
          impact of the infamous EGP extra-hop problem or both.
          Unfortunately, I toss my hat in the latter ring.
 
     Dave Mills  (Mills@UDEL.EDU)
 
 
NSF NETWORKING
--------------
 
     UCAR/BBN LABS NSF NETWORK SERVICE CENTER (NNSC)
 
     The NNSC has been maintaining an on-line calendar of events of
     interest to the Internet community for several months.  We want to
     encourage people to use the service and to contact us
     <nnsc@nnsc.nsf.net> about meetings and conferences that we should
 
 
 
Westine                                                         [Page 9]

Internet Monthly Report                                  September 1987
 
 
     include.
 
     Calendar entries are available by anonymous FTP from sh.cs.net
     (calendar/<mon> where <mon> is the three letter abbreviation for
     the month) or by sending a mail message to the CSNET Info-Server
     (info@sh.cs.net).  For information on how to use the Info Server,
     send a message to <info-server@sh.cs.net> with the following text
     in the body of your message:
 
             Request: calendar
             Topic: help
 
     Bill Curtis and Stan Ruttenberg from UCAR and Karen Roubicek
     represented the NNSC at the meeting of the Federation of Regional
     Networks in Rochester.
 
     On October 30 at 5:00 pm EST, the NNSC hotline number will change
     to
                       (617)873-3400.
 
     Because BBN Labs is changing its phone exchange from 497 to 873,
     the first three digits of the numbers for the NNSC staff will also
     become 873.
 
     By Karen Roubicek (roubicek@nnsc.nsf.net)
 
     NSFNET BACKBONE SITES
 
     CORNELL UNIVERSITY THEORY CENTER
 
     September's statistics are not yet available.  Due to cable swaps,
     the mapping between circuits and interfaces changed at least once
     during the month, so the data needs cleaning up.
 
     The new release of gatedaemon software has been picked up by over
     100 sites since its availability was announced last week.
 
     By Scott Brim (swb@tcgould.tn.cornell.edu)
 
     UNIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN
 
     Midnets lines are installed and connected to a P4200 at Illinois.
     The lines are usable and we are currently waiting for approval of
     our gated configuration to begin announcing them to NSFNET.  Expect
     this to happen by 15 Oct.
 
     Rerouting of the NCAR-UIUC link directly to Urbana (bypassing an
     outrageously noisy T1-broadband cable hop from Chicago) has been
     ordered and a delivery date of the line is 15 Oct.
 
     A P4200 connection to the newly forming Minnesota State Network has
 
 
 
Westine                                                        [Page 10]

Internet Monthly Report                                  September 1987
 
 
     been ordered, with a completion date in October.  Thomas Jacobson
     (thomas@umn-rei-uc.arpa) is the contact for the MN
     network.
 
     We are in the middle of the test window for our ARPAnet connection
     to Madison.  Delivery date according to the TSR is 19 Oct.  We are
     waiting for the 24-hour advance warning call from BBN.
 
     By Ed Krol (krol@uxc.cso.uiuc.edu)
 
     JOHN VON NEUMANN NATIONAL SUPERCOMPUTER CENTER
 
     This report is designed to inform the JVNC Consortium and JVNCNET
     network members, as well as the Internet community, of the monthly
     status of the JVNCNET network.
 
     The data used in this report is collected using a number of
     techniques developed at JVNC, together with data from the JVNC
     operations group.  The NSFNET data is obtained from the NSFNET NOC
     (Network Operations Center).
 
     NETWORK BRIEF:
 
     JVNCNET has the configuration of a "tree".  The 13 Consortium
     Universities*, together with the 3 NRAC** schools form the 16 node
     network.  The gateways are 15 VAX's and 6 Ungermann Bass (UB)
     routers, connected by 9 T1 lines, 5 56kbps lines and 2 satellite
     links.  Locally at JVNC (the center of the JVNCNET network), JVNCA
     (a VAX8600 running ULTRIX) serves as the major router, the primary
     name server for the domain "csc.org", and the primary network
     monitoring system; JVNCB (a VAX750 running ULTRIX) is also a
     gateway; finally, a dedicated UB router connects us to the NRAC
     group.
 
     NETWORK STATISTICS DATA:
 
     The network monitoring statistics are collected from "jvnca", and
     are affected by its down time.  To compensate for this we multiply
     (in our analysis below) the numbers by the percentage of down time
     of jvnca, thus representing the worst case.  We plan to
     overcome this situation with the next release of the JVNC
     monitoring package (see "Network Monitoring Project" below).
 
     The network is polled every 10 minutes with "icmp-echo" packets,
     and the analysis below is performed on this data.  The data
     is processed using information on the "scheduled down time table"
     for the systems.  Therefore, if a gateway is scheduled to be down,
     its information is not computed during that time, whether the
     gateway is operational or not.
 
     The special configuration of this network together with the fact
     that we use different subnets for the point-to-point links allows
 
 
 
Westine                                                        [Page 11]

Internet Monthly Report                                  September 1987
 
 
     us to determine very accurately the reachability for the gateways.
     When we receive an alarm that one node/gateway is unreachable, we
     proceed to determine whether the node is down or some other
     component of the link is non-functional.  This information is
     not affected much by the routing.
 
TABLE I, Gateways Reachability/Link Status
--------------------------------------------------------------------------
date of report: September 30th.
nde           meanttr   maxttr   meantbf   sched  %Up   %Dwn  avail  perf
ias               900    1202    583306     0      99     0     99    99
mit              3399    8999    773742     0      99     0     99    99
nyu              1800    4200    777142     0      99     0     99    99
njit            51406   85779    182276     0      78    21     78    83
brown            3599    9599    580607     0      99     0     99    99
jvncb             601     601   2321825     0      99     0     99    99
jvncc            2747    7801    119485     0      97     2     97    98
jvncd            3719   11399    151108     0      96     2     97    98
jvncf            1049    1800    282510     2      96     0     99    99
umdnj           37789   85779    141966     0      78    21     78    84
arizona          3599   12000    580606     0      99     0     99    99
harvard          8397   34200    147390     0      94     5     94    96
rutgers           749     600    583456     0      99     0     99    99
stevens         48425   85779    107362     0      68    31     68    77
colorado        29435   85777    137480     0      82    17     82    81
columbia        27345   85777     66127     0      70    29     70    78
princeton         599     599   2336227     0      99     0     99    99
rochester        4365   22789    125291     0      96     3     96    97
u-of-penn         599     599   2336227     0      99     0     99    99
penn_state       1950    6000    582256     0      99     0     99    99
Total test time (min):  38947
--------------------------------------------------------------------------
 
     The Header definitions of table I are:
 
     Nde:    the gateway or host on JVNC-NET network (128.121)
 
      Meanttr: in seconds, the mean time to recover from the "down"
              state to the "up" state.  The "down" state is when sending
              "icmp-echo" packets returns no packets, and the "up" state
              is when we receive packets back.  Each test is performed
              10 times, every 10 minutes, and averaged each time.

      Maxttr:  in seconds, the maximum time to recover from the "down"
              state to the "up" state (see above).

      Meantbf: in seconds, the mean time between failures.

      %Up:    percentage of up time, counting the scheduled down time
              as up time.
 
 
 
Westine                                                        [Page 12]

Internet Monthly Report                                  September 1987
 
 
      %Down:  the complement of "%Up", truncated.

      Sched:  in percent, the time the gateways/hosts were "scheduled"
              to be down, with respect to the total test time.  This
              number is truncated.

      Avail:  in percent, the time for which the gateways were available
              (the "up" state) with respect to the total test time
              (minus the scheduled down time).

      Perform: in percent, a figure of merit that considers the number
              of packets lost and the available time.
 
     JVNCA was up 97.84% of the time, giving a worst-case average for
     all gateways of 90.46% up time.
 
     ANALYSIS:
 
     The data of Table I is a product of the gateways' availability
     together with the line/satellite status (the latter including
     communications equipment such as T1 muxes, CSU/DSUs, etc.).  The
     following is broken down in gateways and links (lines and
     satellite).
 
     1.- Gateways Availability:
         ---------------------
 
 
             Table II, Gateways Availability
     ---------------------------------------------------------------
     gateway         node            problems
     ---------------------------------------------------------------
     jvnca           JVNC            power supply, power failure
     jvncb           JVNC            power failure
     jvnc-njit       JVNC            power failure
     rochway         ROCHESTER       hardware
     njit-jvnc       NJIT            power supply
      super-fs        RUTGERS         software hung
     iasvax          IAS             power failure
     hucsc           HARVARD         machine crashed
     egress          NYU             machine crashed
      pusuper         PRINCETON       software hung
     colo            COLORADO        bad memory board
     stevens-njit    STEVENS         power failure
     ---------------------------------------------------------------
 
 
 
 
 
 
 
 
 
Westine                                                        [Page 13]

Internet Monthly Report                                  September 1987
 
 
     2.- Lines Status:
         ------------
 
                     Table III, Line Status
     ---------------------------------------------------------------
     from    to              type      problem
     ---------------------------------------------------------------
     NJIT    STEVENS         56Kbps    good in only one direction
     NJIT    JVNC            56Kbps    bad connector
     JVNC    RUTGERS         T1        preventive maintenance, bad
                                       board on T1 mux.
     JVNC    COLUMBIA        56Kbps    flaky line
     JVNC    MIT             T1        cut of major trunk of AT&T
     JVNC    NJIT            56Kbps    cut of major trunk of AT&T
     JVNC    ROCHESTER       56Kbps    cut of major trunk of AT&T
     MIT     HARVARD         T1        no sync, New England Bell
     ---------------------------------------------------------------
 
     3.- Satellite Nodes Status:
         ----------------------
 
 
                     Table IV, Satellite Nodes Status
     --------------------------------------------------------------
     from            to              type            problem
     --------------------------------------------------------------
     JVNC            COLORADO        satellite       storm
     JVNC            ARIZONA         satellite       storm
     JVNC            COLORADO        satellite       TransLAN
     --------------------------------------------------------------
 
     4.- Traffic:
 
     The traffic that transits JVNCNET is a combination of JVNC
     supercomputer traffic and traffic "in transit".  The JVNC
     supercomputer traffic is the traffic that has the JVNC front ends
     (jvncc, jvncd or jvncf) as source/destination; the traffic in
     transit is the rest.  The total traffic is:
 
        T = traffic on subnet 50 (T1) + traffic on subnet 51 (T2)
          + traffic between the MIT node and the Harvard node (T3) *
          + traffic between the MIT node and the Brown node (T4) *
          + traffic between the NJIT node and the Stevens node (T5) *
          + traffic between the NJIT node and the UMDNJ node (T6) *
 
     where:
 
      T1 = (jvnca + jvncb + jvncc + jvnc-njit + colo + jvax + fuzzball
            + term-serv)|subnet 50
      T2 = (jvnca + jvncd + jvncf + picasso + monet + iris + term-serv1
            + term-serv2 + term-serv3)|subnet 51
 
 
      T3 = |coventry|subnet_3 - coventry|subnet_9|
      T4 = |coventry|subnet_3 - coventry|subnet_10|
      T5 = |njit-jvnc - njit-stevens|
      T6 = |njit-jvnc - njit-umdnj|
 
             * traffic not seen on subnet 50
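 
     As a rough sketch, the bookkeeping above amounts to summing
     per-subnet counters and adding correction terms for traffic that
     never crosses subnet 50.  The gateway names mirror the report,
     but the counter values below are illustrative, not measured data.
 
     ```python
     # Rough sketch of the total-traffic bookkeeping above; the
     # counter values are illustrative only.

     subnet50 = {"jvnca": 59_171_739, "jvncb": 39_930_174}   # T1 terms
     subnet51 = {"jvnca": 16_208_591, "jvncd": 5_277_717}    # T2 terms

     T1 = sum(subnet50.values())
     T2 = sum(subnet51.values())

     # Correction terms (T3..T6): traffic between outlying node pairs
     # never seen on subnet 50, estimated as the absolute difference
     # of the two link counters (e.g. T5 = |njit-jvnc - njit-stevens|).
     njit_jvnc, njit_stevens = 4_000_000, 3_250_000          # illustrative
     T5 = abs(njit_jvnc - njit_stevens)

     total = T1 + T2 + T5    # T3, T4 and T6 would be added likewise
     print(total)            # 121338221
     ```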
 
     number of packets on subnet 50 (one direction, T1) > 140,270,472
                                                          ===========
     NSFNET/total on JVNCNET =~ 25%
                                ====
     JVNC Supercomputer traffic / total on JVNCNET =~ 20%
                                                     =====
 
     where supercomputer traffic is the traffic on subnet 51
     (Ethernet) plus the traffic through jvncc.
 
     The data is collected from jvnca, jvncb and JVNC_fuzzball every 10
     minutes automatically and compiled each month.  No data from the
     jvnc-njit router and Colo is available this month.
 
                Table V, Traffic on JVNCNET (subnet 50 only)
--------------------------------------------------------------------------
local gateway          packets in   packets out   comments
--------------------------------------------------------------------------
jvnca (subnet 50)      59,171,739    46,133,254   accurate
jvnca (subnet 51)      16,208,591    14,086,199   accurate
jvncb (subnet 50)      39,930,174    33,440,167   accurate
jvncc (subnet 50)      14,119,720    12,061,632   accurate
jvncd (subnet 51)       5,277,717     3,331,256   accurate
jvncf (subnet 51)                                 not available
jvnc-njit (subnet 50)                             not available
colo (subnet 50)                                  not available
jvax (subnet 50)        2,308,400     1,560,145   not complete (low est.)
jvnc.nsf.net           24,740,439    35,289,789   fuzzball, includes icmp
-------------------------------------------------------------------------
 
     5.- PSN Status:
 
     Our connection, which was scheduled for the week of 27 September,
     has again been delayed.  There is no further word on when we will
     be connected.
 
 
     6.- Routing:
 
     UB routers: the new version of the software has been running for
                 a month now.  There is a problem with the size of the
                 updates; it will be corrected in the new release
                 (late next week).
 
     VAXs:       a new version of gated has been installed on jvnca
                 and jvncb.  It seems to be working fine and has
                 corrected the kill -2 problem of the last release.
 
     7.- JVNCNET main ethernet traffic analysis:
 
     The overall traffic is high but within the capacity of the
     ethernet.  The broadcast traffic is less than 5% of the total
     traffic, and the multicast traffic is negligible with respect to
     the total traffic.  The number of CRC errors is small with
     respect to the overall traffic, as is the number of alignment
     errors.  The distribution of traffic shows that a significant
     amount of the traffic is within 1-81 bytes/packet (53%), and
     another significant segment is between 82-145 bytes/packet (36%),
     well within the maximum buffer size of all interfaces of the
     JVNCNET network.
 
     8.- Comments:
 
     It is interesting to note that only 20% of the total traffic that
     arrives at our center goes to our supercomputer front ends; the
     rest (80%) is in transit.  Also note that the traffic has
     increased by at least 14% over the previous month.
 
     The network monitoring program helped us detect a problem with
     the T1 mux on the line to Rutgers.  Down time was scheduled for
     preventive maintenance on that line, and after running a BERT
     test we found the problem to be on one card of the mux.
 
     PROJECTS:
 
     1.- Network Monitoring:
 
     Status: Ongoing
 
     The "netsc" monitoring package is up and running on JVNCNET and,
     in beta test, at one member network of JVNCNET.
 
     The next step is to provide backup monitoring capabilities to netsc
     so we can keep the network monitored even when our primary
     monitoring server is down.
 
 
     2.- Network Characterization:
 
     Status: Ongoing
 
     The Network Characterization program is directed towards
     determining the parameters that characterize the JVNCNET
     network's diverse types of service.  This effort will be used not
     only as a research subject but also to find and predict network
     bottlenecks and problems before they are obvious to the end
     users.  This task started two weeks ago with the collection of
     data and the automation of that collection, and will continue
     with the study of the characteristics and patterns that
     distinguish each point-to-point line, whether T1, 56 kbps or
     satellite.  The results will be available to the community.
 
     3.- Traffic Analysis:
 
     Status: Ongoing
 
     The traffic data is currently being collected from JVNCA, JVNCB,
     and the fuzzball automatically.  This will be extended to the other
     VAXs on the same ethernet and the other routers on the network.  At
     the same time the data will be compiled per campus.
 
     INFORMATION:
 
     JVNCNET NOC:            "net@jvnca.csc.org"
     (JVNCNET Network Operations Center)
 
     JVNCNET manager:        "heker@jvnca.csc.org"
 
     * CSC Consortium: Princeton University, MIT, Harvard, Brown,
     University of Pennsylvania, Rutgers, IAS, Columbia, University of
     Rochester, NYU, Penn State, University of Arizona, University of
     Colorado.
 
     ** NRAC (Newark Remote Access): New Jersey Institute of Technology
     (NJIT), Stevens Institute of Technology, University of Medicine and
     Dentistry of New Jersey (UMDNJ).
 
     By Sergio Heker (heker@jvnca.csc.org)
 
     NATIONAL CENTER FOR ATMOSPHERIC RESEARCH AND UNIVERSITY SATELLITE
     NETWORK PROJECT
 
     No report received.
 
     PITTSBURGH SUPERCOMPUTING CENTER
 
     In September our network connections did not behave as well as they
     had for the past few months.
 
 
     PSC-gateway performed very well until the 25th of the month.  The
     CMU IMP (PSN #14), to which we are connected, was upgraded to PSN
     7.0 software on Thursday, the 24th.  Soon afterward we began to
     experience congestion and connectivity problems to other ARPANET
     sites.  The situation worsened throughout the weekend until the
     gateway began crashing and we needed to disable its connection to
     the IMP.  On Tuesday, the 29th, the PSN software was returned to
     version 6.0 and the problems mostly cleared up.  We are still
     observing lack of connectivity to some ARPANET sites.  These
     lingering problems are being investigated.  During these few days
     our T1 connection to SURANET at the University of Maryland
     performed as a backup.  Until the 24th, PSC-gw was putting about
     1.25 million packets a day through its interfaces.
 
     The PSC Fuzzball again performed well; however, we did have to
     reset it a few times during the month to solve problems we
     haven't seen much since the spring.  The fuzzball has remained
     one of the busiest of the NSFNET backbone nodes, pushing about
     680,000 total packets daily.
 
     On September 17 and 18 the satellite dish that will serve as an
     interim ARPANET link for our new IMP was installed.  We are waiting
     for the dish electronics to be installed to make it functional.
     The permanent leased land lines have been ordered.
 
     By David O'Leary (oleary@morgul.psc.edu)
 
     SAN DIEGO SUPERCOMPUTER CENTER
 
     Our PSN was not inconvenienced by the change to the 7.0 software,
     most likely because it is still awaiting paths to the outside
     world.  The VSAT link has been cancelled, as the equipment was
     needed elsewhere by DCA.  The PSN's 4 trunk links are now all
     scheduled to be terrestrial and are coming "real soon now".
 
     Proteon has installed an old FCO in our p4200.  That, plus the
     7.3b software, has caused a tremendous improvement in its
     reliability.
 
     The 56k link to the p4200 at UC Berkeley is operational and
     carrying traffic to UCB and other schools on BARRNET.  None of
     these are being advertised to NSFNET (we are advertising UCSD, UCSB
     and Salk Inst.).
 
     By Paul Love (loveep@sds.sdsc.edu)
 
     NSFNET REGIONAL AFFILIATED & CONSORTIUM NETWORKS
 
     BARRNET
 
     BARRNET continues to operate on a routine basis with only minor
     unscheduled service interruptions.  Nearly all interruptions are
     the result of problems with T1 communications circuits - both
     leased line and private microwave - although two failures occurred
     in our Verilink CSU equipment this month.  A class of
     communications circuit problems has been identified that is
     particularly troublesome: a T1 circuit will experience a high error
     rate but the problem will correct itself before telephone company
     personnel can respond to a service call.  When the service
     technicians do finally respond, the T1 circuit must be taken out of
     service for testing.  This leads to additional service outage and
     all too often the results of the test procedures are "no problem
     found".  Thus the particular troubled site is left with an
     "unexplained" service outage, a circuit that tests out clean, and
     no way of recovering charges associated with lost circuit service
     because the common carrier can't confirm there has been any
     problem with its service.  A mechanism needs to be found which can
     document T1 circuit problems, which the common carrier will
     acknowledge as a valid test, which can be used as a basis for
     measuring the performance of a T1 circuit, and which can be used as
     the basis for recovering circuit costs from the common carrier.  It
     is highly desirable to be able to monitor errors on a T1 circuit
     without removing the circuit from service.  The Proteon high speed
     serial I/O boards should be capable of this function, but at
     present do not address this issue adequately.  An alternative is
     Extended Super Frame (ESF) CSU's, but few common carrier circuits
     support this functionality at the present time.  The problem
     remains unsolved.
 
     BARRNET enabled a new connection to the San Diego Supercomputer
     Center (SDSC) this month, based on a 56Kb terrestrial microwave
     circuit and Proteon p4200 gateway.  At present routing information
     for this connection is static and entered manually, with
     connectivity only between BARRNET and SDSC.  A technical
     subcommittee is planning to meet in the near future to discuss the
     implications to our RIP routing configuration of enabling access to
     the NSFNET backbone through this same 56Kb link.  Results of a one
     day "accidental" experiment involving NSFNET access through the
     link met with mixed success because of multiple routes to other
     networks polluting BARRNET and its member campuses.  Preliminary
     evaluation of the specifications for release 7.4 of the Proteon
     gateway software leads us to believe that 7.4 will adequately
     address many of the most important routing issues in our regional
     network.  In particular, it will allow us to selectively accept
     those routes we know to come from "trusted" sources and exclude
     other routes.  We are anxiously awaiting the release of this new
     software.
 
     An area also of concern to BARRNET, but not addressed by
     the new routing software, is the issue of fallback routing
     capability.  In particular, we have had several occasions where
     communications circuits degrade enough to be unusable for our
     users, but not bad enough for RIP to drop the route.  Another thing
     that we've found is that often a T1 circuit will fail in one
     direction only.  If it's bad enough for RIP to drop the route, it
     will drop it in the wrong direction.  An implicit assumption with
     RIP is that the reception of e.g. southbound RIP packets is a
     useful measure of whether the line is usable for northbound
     traffic.  This assumption is good for Ethernets and bad for T1
     circuits.
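 
     The assumption can be made concrete with a toy model (this models
     the reasoning in the text, not any particular RIP
     implementation): each end keeps a route alive only while it
     *hears* the other end's updates, so a one-way failure drops the
     route on the working side rather than the broken one.
 
     ```python
     # Toy model of the RIP assumption criticized above: an endpoint
     # keeps its route only while it receives updates from the peer.

     def surviving_routes(north_to_south_ok, south_to_north_ok):
         # North hears South's updates only if the south-to-north
         # direction works, and vice versa.
         return {
             "north_still_sends_south": south_to_north_ok,
             "south_still_sends_north": north_to_south_ok,
         }

     # South-to-north direction dead: the north end (whose own sending
     # direction still works) drops the route, while the south end
     # keeps sending into the dead direction -- the wrong way around.
     print(surviving_routes(north_to_south_ok=True, south_to_north_ok=False))
     # {'north_still_sends_south': False, 'south_still_sends_north': True}
     ```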
 
     A general meeting among the BARRNET consortium membership was held
     this month to begin discussions about turning BARRNET into a self
     supporting network.  Issues that were discussed include i) central
     versus distributed network management, ii) policy for new
     connections with other sites, iii) a mechanism to fund continued
     operation of the T1 circuits comprising BARRNET after the 3 year
     NSF funding runs out, iv) identification of resources on BARRNET
     connected campuses that can be used as the focus of experiments on
     a high bandwidth/low delay network such as that provided by the
     BARRNET T1 circuits.  None of these issues were completely settled
     and these and others will be topics in our monthly administration
     meetings.
 
     By Thomas E. Ferrin, UCSF (tef@cgl.ucsf.edu)
 
     JVNCNET (Refer to JVNNSC backbone report)
 
     MERIT (No report received)
 
     MIDNET (No report received)
 
     NORTHWESTNET (No report received)
 
 
     NYSERNET
 
     As of 1 October 1987, NYSERNET had the following topology:
 
                               Clarkson
                  Syracuse--+  |
                            |  |
          Rochester--------Cornell---------RPI---Albany
             |               |              |
          Buffalo...Fredonia |              |
             |               |              |
          Binghamton         |    +-------- | ------StonyBrook
             |               |    |         |
             |               |    |         |
          CUNY--NYTEL/NSMAC--Columbia======NYU-+
          |  |\     |        |    |        /|  |
          |  | \ NYTEL/GC    | NYNEX/S&T  / |  |
          |  |  \          BNL           /  |  |
          |  |   \                      /   |  |
          |  |    +-------------Rockefeller |  |
          |  |                              |  |
          |  +------------------------------+  |
          |                                    |
          +-------------POLY-------------------+
 
          ====    T1
          \ | /   56kbits
          ....    9.6kbits
 
     The Columbia to NYU link has now been upgraded to T1 with
     Columbia/Cornell, Cornell/RPI, and RPI/NYU the next links to be
     upgraded to complete a "ring".
 
     A second meeting of the NYSERNET Library Group was held in New
     York City, where recommendations were made to open up local
     electronic card catalogs and other local services to scholars.
     (For example, by telneting to "infotrac.rpi.edu" one gains access
     to the RPI card catalog system.)
 
     By Marty Schoffstall (schoff@nic.nyser.net)
 
     SDSCNET (Refer to SDSC backbone report)
 
 
     SESQUINET
 
     The complete, initially proposed SesquiNet configuration has now
     been operational for two months.  The following campus networks
     being served, and are advertised via EGP to the core:
 
             Baylor College of Medicine      128.249
             Houston Area Research Center    192.31.87
             Rice University                 128.42
             Texas A&M University            128.194
             Texas Southern University       192.31.101
             and the University of Houston   129.7
 
     Although the network is not yet heavily loaded, we have so far
     experienced no gateway failures.  We have had some trouble with
     leased lines.
 
     We have now done some analysis of the reliability of the network,
     based on logs written by our monitoring program.  For each of the
     sites listed below, we note the number of (unscheduled) failures
     and the percent of availability (percentage of time the site was
     up, deducting scheduled down time).
 
             --------------------------------
             Site    Failures     Available
             --------------------------------
             BCM         0         100.0 %
             HARC        2          95.5 %
             RU          0         100.0 %
             TAMU        3          95.6 %
             TSU         1          99.5 %
             UH          0         100.0 %
             --------------------------------
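 
     Such per-site figures might be tallied from the monitoring log as
     sketched below.  The (site, event, hours) record format and the
     outage durations are assumptions, not SesquiNet's actual data;
     they are chosen so the TSU and TAMU rows come out as in the
     table.
 
     ```python
     # Sketch of the per-site tally described above, from a
     # hypothetical monitoring log of (site, event, hours) records.

     LOG = [
         ("TAMU", "unscheduled_down",  8.0),
         ("TAMU", "unscheduled_down", 12.0),
         ("TAMU", "unscheduled_down", 11.7),
         ("TSU",  "scheduled_down",    6.0),
         ("TSU",  "unscheduled_down",  3.6),
     ]

     HOURS = 720.0  # assumed hours of observation per site this month

     def site_stats(site):
         sched = sum(h for s, e, h in LOG
                     if s == site and e == "scheduled_down")
         unsched = [h for s, e, h in LOG
                    if s == site and e == "unscheduled_down"]
         observed = HOURS - sched          # deduct scheduled down time
         avail = 100.0 * (observed - sum(unsched)) / observed
         return len(unsched), round(avail, 1)

     print(site_stats("TSU"))    # (1, 99.5)
     print(site_stats("TAMU"))   # (3, 95.6)
     ```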
 
     While some of the problems at TAMU and TSU were due to leased
     lines, most of the unscheduled downtime was due to machine room
     power problems.
 
     We continue to await the completion of the connection from
     NSFNET/NCAR to SesquiNet/Rice via fuzzballs at the two sites.
 
     By Guy Almes (almes@rice.edu)
 
     SURANET (No report received)
 
 
     WESTNET
 
 
     1.   We have not yet received funds from NSF, although we expect
          funding imminently.
 
     2.   We are in the process of preparing the paperwork to order 5
          gateways for the 4 sites in New Mexico and the University of
          Wyoming, to be implemented as Phase II of Westnet. We are also
          ordering (funded by the state of Colorado) 5 cisco gateways to
          replace the DEC uVAX II's which will be used as domain name
          servers and for bulletin boards, etc.
 
     3.   Brigham Young University and the University of Utah have
          requested to participate in Westnet. Discussions are ensuing
          regarding the nature of the connection. This should be a cost
          effective means of service, as a 56 kbps line is already
          budgeted, and only two local lines need be funded.
 
     4.   We are planning a workshop for technical representatives of
          the Phase II sites, to be held Nov. 9 to 11, 1987, to cover
          Routing Issues and use of the cisco gateways. The workshop
          will be held at the University of Colorado.
 
     5.   David Wood attended a users group meeting at cisco, and
          delivered a one hour presentation on Westnet, which was very
          well received. A lively discussion occurred afterwards, during
          the general discussion period. It appears that cisco will be
          very responsive to the needs of users. Incidentally, HP has
          over 50 cisco gateways interconnecting over 4500 hosts.
 
          By Pat Burns (pburns%csugreen.bitnet@WISCVM.WISC.EDU)
 
 
TASK FORCE REPORTS
------------------
 
 
     APPLICATIONS -- USER INTERFACE
 
          No report received.
 
 
     AUTONOMOUS NETWORKS
 
          No progress to report this month.
 
          Deborah Estrin (Estrin@OBERON.USC.EDU)
 
 
     END-TO-END SERVICES
 
          No progress to report this month.
 
          Bob Braden (Braden@ISI.EDU)
 
 
     INTERNET ARCHITECTURE
 
          The Internet Architecture Task Force (INARC) studies technical
          issues in the evolution of the Internet from its present
          architectural model to new models appropriate for very large,
          very fast internets of the future. It is organized as a
          recurring workshop where researchers, designers and
          implementors can discuss novel ideas and experiences without
          limitation to the architecture and engineering of the present
          Internet. The output of this effort represents advance
          planning for a next-generation internet, as well as fresh
          insights into the problems of the current one.
 
          The INARC is planning a two-day retreat/workshop for 17-18
          November at BBN to discuss a fresh start on advanced internet
          concepts and issues. The agenda for this meeting will be to
          explore architecture and engineering issues in the design of a
          next-generation internet system. The format will consist of
          invited presentations on selected topics followed by a general
          discussion on related issues. Written contributions of
          suitable format and content will be submitted for publication
          in the ACM Computer Communication Review.
 
          In order to have the most stimulating discussion possible, the
          INARC is expanding the list of invitees to include those
           researchers with an agenda to plow, an axe to grind, a
           sword to wield, or any other useful instrument for that
           matter.  Contributors
          are invited to submit concise summaries of presentations of
          from fifteen to forty minutes in electronic form to
          mills@udel.edu or in hardcopy form to
 
           Dr. David L. Mills
           Electrical Engineering Department
           University of Delaware
           Newark, DE 19716
           (302) 451-8247
 
          Up to forty participants will be selected on the basis of
          quality, relevance and interest. Following is a list of
          possible areas and issues of interest to the community.
          Readers are invited to submit additions, deletions and
          amendments.
 
 
          1.   How should the next-generation internet be structured, as
               a network of internets, an internet of internets or both
               or neither? Do we need a hierarchy of internets? Can/must
               the present Internet become a component of this
               hierarchy?
 
          2.   What routing paradigms will be appropriate for the new
               internet? Will the use of thinly populated routing agents
               be preferred over pervasive routing data distribution?
               Can innovative object-oriented source routing mechanisms
               help in reducing the impact of huge, rapidly changing
               data bases?
 
          3.   Can we get a handle on the issues involved in policy-
               based routing? Can a set of standard route restrictions
                (socioeconomic, technopolitic or bogonmetric) be
               developed at reasonable cost that fit an acceptable
               administrational framework (with help from the Autonomous
               Networks Task Force)? How can we rationalize these issues
               with network control and access-control issues?
 
          4.   How do we handle the expected profusion of routing data?
               Should it be hierarchical or flat? Should it be
               partitioned on the basis of use, service or
               administrative organization? Can it be made very dynamic,
               at least for some fraction of clients, to support mobile
               hosts? Can it be made very robust in the face of hackers,
               earthquakes and martians?
 
          5.   Should we make a new effort to erase intrinsic route-
               binding in the existing addressing mechanism of the
               Internet IP address and ISO NSAP address? Can we evolve
               extrinsic binding mechanisms that are fast enough, cheap
               enough and large enough to be useful on an internet
               basis?
 
          6.   Must constraints on the size and speed of the next-
               generation internet be imposed? What assumptions scale on
               the delay, bandwidth and cost of the network components
               (networks and gateways) and what assumptions do not?
 
           7.   What kind of techniques will be necessary to accelerate
               reliable transport service from present speeds in the low
               megabit range to speeds in the FDDI range (low hundreds
               of megabits)? Can present checksum, window and backward-
               correction (ARQ) schemes be evolved for this service, or
               should we shift emphasis to forward-correction (FEC) and
                streaming schemes?
 
          8.   What will the internet switch architecture be like? Where
               will the performance bottlenecks likely be? What
               constraints on physical, link and network-layer protocols
               will be advisable in order to support the fastest speeds?
               Is it possible to build a range of switches running from
               low-cost, low-performance to high-cost, high-performance?
 
          9.   What form should a comprehensive congestion-control
               mechanism take? Should it be based on explicit or
               implicit resource binding? Should it be global in scope?
               Should it operate on flows, volumes or some other traffic
               characteristic?
 
          10.  Do we understand the technical issues involved with
               service-oriented routing, such as schedule-to-deadline,
               multiple access/multiple destination, delay/throughput
               reservation and resource binding? How can these issues be
               coupled with effective congestion-control mechanisms?
 
          11.  What will be the relative importance of delay-based
               versus flow-based service specifications to the client
               population? How will this affect the architecture and
               design? Can the design be made flexible enough to provide
               a range of services at acceptable cost? If so, can the
               internet operation setpoint be varied, automatically or
               manually, to adapt to different regimes quickly and with
               acceptable thrashing?
 
          12.  What should the next-generation internet header look
               like? Should it have a variable-length format or fixed-
               length format? How should options, fragmentation and
               lifetime be structured? Should source routing or
               encapsulation be an intrinsic or derived feature of the
               architecture?
 
 
          13.  What advice can we give to other task forces on the
               impact of the next-generation internet in their areas of
               study? What research agenda, if any, should we propose to
               the various NSF, DARPA and other agencies?  What advice
               can we give these agencies on the importance, level of
                effort and probability of success of the agenda to their
               current missions?
 
          Dave Mills (Mills@HUEY.UDEL.EDU)
 
 
     INTERNET ENGINEERING
 
          No report received.
 
 
     PRIVACY
 
          No reportable Privacy Task Force activity took place in
          September.
 
          John Linn (Linn@CCY.BBN.COM)
 
 
     ROBUSTNESS AND SURVIVABILITY
 
          No report received.
 
 
     SCIENTIFIC COMPUTING
 
          No report received.
 
 
     SECURITY
 
          No report received.
 
 
     TACTICAL INTERNET
 
          No report received.
 
 
     TESTING AND EVALUATION
 
          No report received.
 