Article

A Protocol for Packet Network Intercommunication

Authors:
Vinton G. Cerf, Robert E. Kahn

Abstract

A protocol that supports the sharing of resources that exist in different packet switching networks is presented. The protocol provides for variation in individual network packet sizes, transmission failures, sequencing, flow control, end-to-end error checking, and the creation and destruction of logical process-to-process connections. Some implementation issues are considered, and problems such as internetwork routing, accounting, and timeouts are exposed.
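To make the listed mechanisms concrete, here is a minimal, hedged Python sketch of a toy transport segment with sequencing, an end-to-end one's-complement checksum, and fragmentation to a per-network maximum packet size. All field names and sizes are invented for illustration and do not reproduce the header layout actually proposed in the paper.

```python
# Illustrative sketch only: a toy segment format with sequencing, an end-to-end
# checksum, and fragmentation to a per-network maximum packet size. The field
# names and sizes are hypothetical and do NOT reproduce the 1974 header layout.
import struct

HDR = struct.Struct("!HHIH")  # src port, dst port, sequence number, checksum (toy sizes)

def ones_complement_sum(data: bytes) -> int:
    """16-bit one's-complement sum, the style of checksum used end to end."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)
    return (~total) & 0xFFFF

def make_segment(src: int, dst: int, seq: int, payload: bytes) -> bytes:
    header = HDR.pack(src, dst, seq, 0)
    checksum = ones_complement_sum(header + payload)
    return HDR.pack(src, dst, seq, checksum) + payload

def fragment(packet: bytes, max_size: int):
    """Split a packet to fit a transit network whose maximum packet size is smaller."""
    return [packet[i:i + max_size] for i in range(0, len(packet), max_size)]

if __name__ == "__main__":
    seg = make_segment(src=1024, dst=80, seq=1, payload=b"hello across networks")
    pieces = fragment(seg, max_size=8)      # e.g. a network with small packets
    reassembled = b"".join(pieces)          # receiver reassembles in order
    src, dst, seq, cksum = HDR.unpack(reassembled[:HDR.size])
    body = reassembled[HDR.size:]
    # verify the end-to-end checksum only after reassembly at the destination
    ok = ones_complement_sum(HDR.pack(src, dst, seq, 0) + body) == cksum
    print(seq, ok)
```

The fragmentation step stands in for the paper's accommodation of differing packet sizes across networks; a real implementation would also carry reassembly and connection-management metadata.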


... The latter began its march in 1973, when the first big breakthrough was accomplished with the launch of the "International Research Project", which resulted in the development of the Internet Protocol (IP), allowing computers from different networks to communicate and share data [6]. To complement IP and fill its gaps in data delivery, the Transmission Control Protocol (TCP) was also implemented, and together with IP it forms the TCP/IP protocol stack [6]. TCP designates rules on how to reliably establish and maintain error-free network connections between applications running over the IP network. ...
... In fact, the number of sensor nodes varies among the WSNs in order to create diversified traffic-load scenarios (from low to high) at the respective fog devices and obtain a more objective assessment of their conforming capabilities. The complete setup of F.E.MO.S. is illustrated in Figure J. [Table residue omitted: per-WSN sensor-node counts.] Table J.2 lists the average values of raw data processing time in milliseconds for the deployed remote server and fog devices. In particular, the values refer to the mean time elapsed for each device to process a raw data packet acquired from the sensor nodes, from the moment it receives the first bit of data to the moment it sends back the appropriate response. ...
... It is the data transmission protocol that controls the processes of packaging, transporting, addressing, and resolving data before it is transferred. Thanks to this protocol, which regulates the rules of communication between computers, computers can exchange data and communicate over a network (Cerf and Kahn, 1974). With the widespread adoption of TCP/IP, the concept of the internet began to gain acceptance in 1983 (Jones, 2002: 13). [...] His use of those expressions (Nye, 1990: 156) largely describes the consequences of today's communication technologies. ...
... In his article "The Clash of Civilizations", published in 1993, Samuel Huntington wrote that "the fundamental source of conflict in the new world will not be primarily ideological or..." [text breaks off in source] ...are able to exchange data and communicate with one another. This technology was developed in 1974 by Vinton Cerf and Robert Kahn (Cerf and Kahn, 1974). ...
Chapter
Full-text available
While information exchange via new media technology circulates freely around the world without recognizing national borders, while information and human activity in this circulation process can be observed and measured almost flawlessly, and while global common values can be created through the messages shared by media users thanks to this technology, one question comes to mind: is new media innovation the end of history? In the first part of this three-part study, the concept of the information society is examined together with the processes that enabled the reinvention of labor, value, and the tool. What the social order is, and how a new world order is established through technology, will be investigated. In this context, the dynamics that brought about the transition from industrial society to the information society are examined. In the second part of the study, innovation within the scope of new media is examined in terms of actors, policy, and institutional needs. In particular, the study examines 1) the basic needs underlying the communication technology innovations that gained momentum during the Second World War, leaped forward in the 1960s, and came to be called new media in the 1980s; 2) the political and economic needs of the stakeholders making these innovations; and 3) the research carried out within the scope of these innovations. In this context, Enigma, the Berlin Blockade, cybernetics, and ARPANET are investigated. In this way, the study seeks to understand by whom, how, and for what purpose social change takes place while technology, society, and the individual are being reinvented. The third part examines how the ownership of the developed techniques was protected over time through state policies and what was done when the innovations entered the diffusion phase, together with the political, social, and economic repercussions of all these processes. Going beyond the surface definitions of the concepts of the information society and new media, the study discusses whether the information society, presented as a process of emancipation, and its medium, new media, constitute a new generation of control and surveillance systems. Research data collected through a literature review will be analyzed according to their historicity. By comparing definitions made in the 21st century with the facts surrounding the formation of the process, an up-to-date assessment of the concepts of the information society and new media will be made. The study consults Joseph Schumpeter's concepts of innovation and creative destruction; the views of Edibe Sözen, Peter L. Berger, and Thomas Luckmann on social models; the work of researchers who made important contributions to the development of communication technologies, such as Alan Turing, Norbert Wiener, Claude Shannon, Warren Weaver, Murray Turoff, Douglas Engelbart, JCR Licklider, and Paul Baran; the views of new media researchers such as George Stein, Ronald E. Rice, Jacques Valle, and Howard Rheingold; and reports from NFS, the United Nations Conference on Trade and Development (UNCTAD), the U.S. Congress Office of Technology Assessment, the United States Congress House Judiciary Committee, and similar bodies.
... Put differently, one could say that two nodes are in the same region if they can communicate with each other without using DTN gateways, relying only on the protocols of the region to which they belong. DTN gateways correspond both to the "waypoint" concept of the Metanet in [17] and to the concept of gateways described in the original ARPANET design [18] [19]. Each DTN gateway that moves between two regions consists of two "halves", each of which belongs to one of the neighboring regions. ...
... The subnet address of network1, as shown in Figure 5.2, is 172.18 [address truncated in source]. Finally, we create a script that automates the intermittent communication of the containers during the experiments. Initially, we disconnect the consumer and producer containers so that the Intermediate is in the disconnected state from the start. ...
... At the same time, the Internet began to be used as the communication channel of bilateral teleoperation systems. The Internet uses a packet-switching protocol [33], so the Internet's time delay is random. Existing delay analysis methods cannot analyze this random time delay. ...
... From (33), V(t) ≥ 0 and S(t) ≥ 0 can be obtained. Therefore, the master-slave manipulator model satisfies the passivity condition, as shown in (35): ...
Article
Bilateral teleoperation robots with force feedback enable humans to accomplish hazardous tasks without being exposed to the hazardous environments themselves. The performance of bilateral teleoperation systems with force feedback is described by their stability and transparency. Force feedback enables humans to combine tactile sensing with vision. However, force feedback may destabilize bilateral teleoperation if the communication channels have a time delay. The instability of force-feedback bilateral teleoperation introduced by time delay has become one of the complicated problems researchers need to solve. Transparency is one of the leading design objectives of a teleoperation system. There are two evaluation criteria for transparency: the accuracy with which the slave arm follows the position of the master arm, and the accuracy of the force feedback the master arm receives from the slave arm. The main content of this paper is as follows: 1) This paper surveys and summarizes the control structures and control algorithms of several well-developed force-feedback bilateral teleoperation systems and chooses to improve the PBTDPA algorithm, which aligns with practical application requirements. 2) The four-channel structure makes the transparency of force-feedback bilateral teleoperation systems perfect in theory. This paper combines the four-channel structure with the PBTDPA algorithm to improve transparency. 3) Moreover, a delay predictor is used to improve the four-channel power-based time domain passivity approach (PBTDPA) control strategy. A delay differential predictor is added to the communication channel; it estimates the communication channel's delay change rate instead of using the maximum delay change rate, thereby improving transparency. Simulation experiments of the improved control strategy were carried out, and the results show the excellent performance of our design.
... This protocol regulates the rules of communication between computers, enabling data exchange and communication over a network. This technology was developed in 1974 by Vinton Cerf and Robert Kahn (Cerf & Kahn, 1974). [...] year (Jones, 2002, p. 14), and the strategy behind it may have inspired Kissinger's statements. ...
Chapter
Full-text available
The emergence of new communication technologies in the 1990s catalyzed a shift in global power dynamics and societal aspirations towards democracy and peace. Francis Fukuyama's "end of history" thesis, proposing the convergence of free markets and democracy, coincided with the rise of new media technologies, reshaping political discourse. This article investigates whether new media innovation signals the culmination of history or signifies a new chapter in the information society. By exploring the reinvention of labor, value, and tools in the transition to an information society, the study examines the socio-economic implications of new communication technologies. Furthermore, it delves into the relationship between technology, science, and politics, analyzing historical events like the Berlin Blockade and the development of ARPANET. The study also scrutinizes the consequences of innovation on policy, particularly focusing on patent laws and ownership rights. Ultimately, the article concludes that while new media innovation disrupts established orders and fosters social transformation, it does not signify the end of history. Rather, it underscores the ongoing evolution of societal structures and power dynamics in the information age, where technology plays a central role in shaping human interaction, values, and consciousness.
... The deployment of Holo-Cloud uses TCP protocol for network communication. TCP is one of the most frequently used protocols within digital network communications and ensures end-to-end data delivery [46]. All outbound TCP/IP traffic originating from the FI3D server to the internet was permitted during the experiments, facilitating the necessary functions of the software and operating system. ...
Article
Full-text available
Background: The ever-growing extended reality (XR) technologies offer unique tools for the interactive visualization of images with a direct impact on many fields, from bioinformatics to medicine, as well as education and training. However, the accelerated integration of artificial intelligence (AI) into XR applications poses substantial computational processing demands. Additionally, the intricate technical challenges associated with multilocation and multiuser interactions limit the usability and expansion of XR applications. Methods: A cloud deployable framework (Holo-Cloud) as a virtual server on a public cloud platform was designed and tested. The Holo-Cloud hosts FI3D, an augmented reality (AR) platform that renders and visualizes medical 3D imaging data, e.g., MRI images, on AR head-mounted displays and handheld devices. Holo-Cloud aims to overcome challenges by providing on-demand computational resources for location-independent, synergetic, and interactive human-to-image data immersion. Results: We demonstrated that Holo-Cloud is easy to implement, platform-independent, reliable, and secure. Owing to its scalability, Holo-Cloud can immediately adapt to computational needs, delivering adequate processing power for the hosted AR platforms. Conclusion: Holo-Cloud shows the potential to become a standard platform to facilitate the application of interactive XR in medical diagnosis, bioinformatics, and training by providing a robust platform for XR applications.
... Hardware-Based Packet Processing [9] Understandability: Hardware-based packet processing often occurs at high speeds and involves specialized networking hardware, such as routers and switches. The understandability of packet processing in such systems primarily lies in configuring and managing the hardware devices. ...
Experiment Findings
Full-text available
Today's educational institutions continually aim to enhance scalability and adaptability to meet the growing demands of digital education. This research paper reviews the literature on extensible network model development, focusing on network architecture in educational institutions. The review was needed to learn about and produce an architecture, or a model, for the network that supports extending the research required for the development of the network model. The paper surveys network architecture, IP addressing, IP address lookup, IPv4 and IPv6 implementations, packet sending and receiving, and packet generation, which together give an idea of how to pass information or a signal and gain access to system information from centralized systems or from a particular system.
... Any valid perturbation to these bytes will result in a final window size value between 1 and 65535. This directly impacts bytes 15 and 16 of the TCP header, and indirectly impacts bytes 17 and 18 of the TCP header, which represent the TCP checksum, similar to the IP checksum described by Cerf and Kahn [1974]. ...
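The byte positions mentioned in the excerpt can be illustrated with a small, hedged sketch. Note that the excerpt counts TCP header bytes starting at 1, while the Python code below uses 0-based offsets (window at 14-15, checksum at 16-17); a real TCP checksum additionally covers a pseudo-header and the payload, which are passed in here only as an opaque byte string.

```python
# Sketch under stated assumptions: standard TCP header layout, 0-based offsets.
# A real checksum also covers the pseudo-header and payload (passed in opaquely).
import struct

def tcp_checksum(data: bytes) -> int:
    """16-bit one's-complement sum over 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)
    return (~total) & 0xFFFF

def perturb_window(header: bytearray, new_window: int, pseudo_and_payload: bytes) -> bytearray:
    """Write a new window size (offsets 14-15) and refresh the checksum (offsets 16-17)."""
    struct.pack_into("!H", header, 14, new_window & 0xFFFF)
    struct.pack_into("!H", header, 16, 0)              # zero the checksum before recomputing
    csum = tcp_checksum(pseudo_and_payload + bytes(header))
    struct.pack_into("!H", header, 16, csum)
    return header

hdr = bytearray(20)                                    # minimal 20-byte TCP header, all zeros
print(perturb_window(hdr, 4096, b"").hex())            # window changed, checksum recomputed
```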
Preprint
Recent advancements in artificial intelligence (AI) and machine learning (ML) algorithms, coupled with the availability of faster computing infrastructure, have enhanced the security posture of cybersecurity operations centers (defenders) through the development of ML-aided network intrusion detection systems (NIDS). Concurrently, the abilities of adversaries to evade security have also increased with the support of AI/ML models. Therefore, defenders need to proactively prepare for evasion attacks that exploit the detection mechanisms of NIDS. Recent studies have found that the perturbation of flow-based and packet-based features can deceive ML models, but these approaches have limitations. Perturbations made to the flow-based features are difficult to reverse-engineer, while samples generated with perturbations to the packet-based features are not playable. Our methodological framework, Deep PackGen, employs deep reinforcement learning to generate adversarial packets and aims to overcome the limitations of approaches in the literature. By taking raw malicious network packets as inputs and systematically making perturbations on them, Deep PackGen camouflages them as benign packets while still maintaining their functionality. In our experiments, using publicly available data, Deep PackGen achieved an average adversarial success rate of 66.4% against various ML models and across different attack types. Our investigation also revealed that more than 45% of the successful adversarial samples were out-of-distribution packets that evaded the decision boundaries of the classifiers. The knowledge gained from our study on the adversary's ability to make specific evasive perturbations to different types of malicious packets can help defenders enhance the robustness of their NIDS against evolving adversarial attacks.
... We assume that fixed-width fields and variable-length payloads (composed of one or more fields) start at fixed offsets from the start of a message. This representation choice facilitates efficient and unambiguous deserialization, a desirable feature for exchanging binary data that is widely used, for example in the IP [22], UDP [54], and BGP [55] protocols and in ASN.1 BER serializations [37], [30]. To handle cases where this assumption does not hold, such as protocols with union types, BinaryInferno is tuned to not lead analysts astray by avoiding false positives. ...
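As a rough illustration of that representation assumption (not BinaryInferno's actual inference code), the following sketch defines a hypothetical record whose fixed-width fields and variable-length payload all start at fixed offsets, which is what makes deserialization unambiguous.

```python
# Minimal sketch of the fixed-offset assumption: fixed-width fields, then a
# variable-length payload whose start offset is also fixed. The record format
# here is hypothetical, invented for illustration only.
import struct

RECORD_HDR = struct.Struct("!HHI")   # msg_type (2 B), payload_len (2 B), timestamp (4 B)

def serialize(msg_type: int, timestamp: int, payload: bytes) -> bytes:
    return RECORD_HDR.pack(msg_type, len(payload), timestamp) + payload

def deserialize(buf: bytes):
    # Unambiguous: every field, and the payload start, sits at a fixed offset.
    msg_type, payload_len, timestamp = RECORD_HDR.unpack_from(buf, 0)
    payload = buf[RECORD_HDR.size:RECORD_HDR.size + payload_len]
    return msg_type, timestamp, payload

assert deserialize(serialize(7, 1700000000, b"abc")) == (7, 1700000000, b"abc")
```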
... The history of the Internet and its associated technologies has been well researched [34,35]. The transition from circuit-switched to packet-switched data [36], followed by advances in networking large and expensive computers with Interface Message Processors (IMPs) [37], and eventually protocol and software suites such as the Transmission Control Protocol/Internet Protocol (TCP/IP) [38], is a socio-technical story of development that occurs within both civilian and military environments. The development of networked technologies is imbued with the hopes and fears, constraints and freedoms associated with the times in which it was initially developed. ...
Article
Full-text available
The long progress towards universal human rights is regressing. This regression is pronounced within digital spaces once thought to be potential bulwarks of a new era in human rights. On the contrary, new technologies have given rise to threats that undermine the autonomy, empathy, and dignity of human beings. Early visions of human rights being strengthened by networked technologies have instead crashed into technological realities which not only fail to advance human rights discourses, but rather serve to actively undermine fundamental human rights in countries around the world. The future of human rights is increasingly threatened by advances that would make George Orwell blush. Omnipresent data collection and algorithmic advances once promising a utopian world of efficiency and connection are deeply interwoven with challenges to anonymity, privacy, and security. This paper examines the impact of technological advances on the regression of human rights in digital spaces. The paper examines the development of human rights through changes in concepts of autonomy, empathy, and dignity, and charts their regression as technologies are used to increasingly prey on the very same characteristics that undergird human rights discourses.
... A second change is the decoupling of transport protocols from the network layer. In the initial design of the host-to-host protocol for the ARPANet [15], the network and the transport layer were strongly coupled. It gave birth to TCP [67] and IPv4 [66] which kept some interdependence as the TCP checksum is computed using a pseudo-header including the source and destination IP addresses. ...
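The interdependence mentioned in the excerpt can be sketched directly: the TCP checksum is computed over a pseudo-header that borrows the source and destination IPv4 addresses from the network layer. The following is a minimal illustration following the standard pseudo-header layout, not code from the cited work.

```python
# Hedged sketch of the TCP/IP coupling: the TCP checksum covers a pseudo-header
# that includes the source and destination IPv4 addresses (network-layer fields).
import socket
import struct

def ones_complement(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"
    s = 0
    for i in range(0, len(data), 2):
        s += (data[i] << 8) | data[i + 1]
        s = (s & 0xFFFF) + (s >> 16)
    return (~s) & 0xFFFF

def tcp_checksum_with_pseudo_header(src_ip: str, dst_ip: str, tcp_segment: bytes) -> int:
    pseudo = struct.pack(
        "!4s4sBBH",
        socket.inet_aton(src_ip),    # source IPv4 address (borrowed from the IP layer)
        socket.inet_aton(dst_ip),    # destination IPv4 address (borrowed from the IP layer)
        0,                           # zero byte
        6,                           # protocol number for TCP
        len(tcp_segment),            # TCP length (header + payload)
    )
    return ones_complement(pseudo + tcp_segment)

print(hex(tcp_checksum_with_pseudo_header("10.0.0.1", "10.0.0.2", b"\x00" * 20)))
```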
Article
Full-text available
The Internet uses IP addresses to identify and locate network interfaces of connected devices. IPv4 was introduced more than 40 years ago and specifies 32-bit addresses. As the Internet grew, available IPv4 addresses eventually became exhausted more than ten years ago. The IETF designed IPv6 with a much larger addressing space consisting of 128-bit addresses, pushing back the exhaustion problem much further into the future. In this paper, we argue that this large addressing space allows reconsidering how IP addresses are used and enables improving, simplifying and scaling the Internet. By revisiting the IPv6 addressing paradigm, we demonstrate that it opens up several research opportunities that can be investigated today. Hosts can benefit from several IPv6 addresses to improve their privacy, defeat network scanning, improve the use of several mobile access networks and their mobility as well as to increase the performance of multicore servers. Network operators can solve the multihoming problem more efficiently and without putting a burden on the BGP RIB, implement Function Chaining with Segment Routing, differentiate routing inside and outside a domain given particular network metrics and offer more fine-grained multicast services.
... Rateless codes, in which reliable decoding does not occur at a predetermined time (i.e., blocklength) n but may vary depending on the channel realization, are somewhat related to our problem. See, for example, [4], [5], [6] in the context of the erasure channel, [8], [9], [10], [11], [12], [13] in the context of adaptive routing protocols for networks, and [14], [15], [16], [17] in the context of discrete memoryless networks such as multiple access, relay, and broadcast channels. ...
Article
The traditional notion of capacity studied in the context of memoryless network communication builds on the concept of block-codes and requires that, for sufficiently large blocklength n, all receiver nodes simultaneously decode their required information after n channel uses. In this work, we generalize the traditional capacity region by exploring communication rates achievable when some receivers are required to decode their information before others, at different predetermined times; referred here as the "time-rate" region. Through a reduction to the standard notion of capacity, we present an inner-bound on the time-rate region. The time-rate region has been previously studied and characterized for the memoryless broadcast channel (with a sole common message) under the name "static broadcasting".
... Current Internet communication relies on the IP protocol [12] that assumes a host-centric networking model. Host-centric communication models enable communication between well-defined locations. ...
Thesis
The amount of data exchanged over the Internet has grown drastically over the past decades. The increasing number of users, connected devices, and the popularity of video content have surged the demand for new communication methods that can deal with the growing volume of data traffic. Information-Centric Networking (ICN) has been proposed as an alternative to traditional IP-based networks. In ICN, consumers request named content via Interest packets to the network and receive data as a response to their request without taking care of the location of the content in the network. ICN allows in-network caching and naturally supports the use of multiple paths. Nevertheless, the maximum throughput can only be achieved if the content is requested over an optimal set of multicast trees. The computation of such multicast trees is hard to scale over large dynamic networks and requires coordination among network entities. Network coding has been recently introduced in ICN to improve multi-path dissemination and caching of content without the need for coordination. The challenge in the case of network coding is to get independent coded content in response to multiple parallel Interests by one or several consumers. In this thesis, we analyze some previous works that integrate network coding and ICN and identify some key issues these works face. We introduce an efficient solution where clients add compact information to Interest packets in order to ensure linear independence of content in network-coded ICN. This thesis proposes an architecture, MICN, that provides network coding on top of an Interest-based ICN implementation: Named Data Networking (NDN). The proposed architecture helps alleviate the issues faced by network coding-enabled ICN solutions presented in the past. A novel construction called MILIC (Multiple Interests for Linearly Independent Content) is introduced that imposes constraints on how the replies to Interests are coded, intending to get linearly independent contents in response to multiple Interests. Numerical analysis and simulations illustrate that the MILIC construction performs well with network-coded NDN, and the MICN protocol yields close to optimal throughput in some scenarios. The performance of MICN compares favorably to existing protocols. It shows significant benefits when considering the total number of transmitted packets in the network and in the case of lossy links. Several modified forwarding techniques integrated into the MICN protocol are proposed to optimize the network resource utilization while keeping a high throughput. MILIC led us to consider the problem of constructing subsets of vectors from a given vector space, such that when drawing arbitrarily one vector from each subset, the selected vectors are linearly independent. This thesis considers it as a mathematical problem and studies some alternative solutions to the MILIC construction. Finally, the thesis proves that a large family of solutions to this problem are equivalent to MILIC.
... A network protocol defines the communication rules between one or more entities, and the protocol specification is usually composed of vocabulary and grammar [1]. Vocabulary is a collection of messages and their formats, and grammar defines all the process rules. ...
Article
Full-text available
This paper analyzes typical abnormal behaviors of network protocols, such as hidden and invisible attacks, and proposes a new method to perceive and mine the abnormal behaviors of protocols so as to evaluate the operational security of protocols. Aiming at the problems of long latency and the difficulty of exposing the hidden behavior of network protocols, a scheme combining dynamic taint analysis and instruction clustering analysis is proposed. The proposed method can not only quickly distinguish the hidden behavior but also accurately grasp the hidden behavior instruction sequence. Experimental results show that the proposed method can mine unknown protocols for abnormal behavior with high accuracy.
... It provides the communication service for upper application layers, such as connection-oriented data streaming, reliability and congestion control. The well-known transport protocols of the Internet include the connection-oriented Transmission Control Protocol (TCP) [26] and the connectionless User Datagram Protocol (UDP) [95]. ...
Preprint
Full-text available
Applications running in geographically distributed settings are becoming prevalent. Large-scale online services often share or replicate their data into multiple data centers (DCs) in different geographic regions. Driven by the data communication needs of these applications, the inter-datacenter network (IDN) is becoming increasingly important. However, we find congestion control for inter-datacenter networks quite challenging. Firstly, inter-datacenter communication involves both data center networks (DCNs) and wide-area networks (WANs) connecting multiple data centers. Such a network environment presents quite heterogeneous characteristics (e.g., buffer depths, RTTs). Existing congestion control mechanisms consider either DCN or WAN congestion, while not simultaneously capturing the degree of congestion for both. Secondly, to reduce evolution cost and improve flexibility, large enterprises have been building and deploying their wide-area routers based on shallow-buffered switching chips. However, with legacy congestion control mechanisms (e.g., TCP Cubic), the shallow buffer can easily get overwhelmed by large-BDP (bandwidth-delay product) wide-area traffic, leading to high packet losses and degraded throughput. This thesis describes my research efforts on optimizing congestion control mechanisms for inter-datacenter networks. First, we design GEMINI, a practical congestion control mechanism that simultaneously handles congestion in both the DCN and the WAN. Second, we present FlashPass, a proactive congestion control mechanism that achieves near-zero loss without degrading throughput under the shallow-buffered WAN. Extensive evaluation shows their superior performance over existing congestion control mechanisms.
... For these computers to be able to communicate, they need common "languages" known as communication protocols. The protocol suite TCP/IP [Cerf 1974], a conceptual model developed in the 1970s, is the de facto standard used by the Internet to communicate between networks and devices. The Internet is a network of networks. ...
Thesis
With the exponential growth in technology performance, the modern world has become highly connected, digitized, and diverse. Within this hyper-connected world, communication networks and the Internet are part of our daily life and play many important roles. However, ever-growing internet services and applications and massive traffic growth make networks so complex that traditional management functions, governed mainly by human operations, fail to keep the network operational. In this context, Software-Defined Networking (SDN) emerged as a new architecture for network management. It makes networks programmable by bringing flexibility to their control and management. Even if network management is eased, it remains tricky to handle due to the continuous growth of network complexity. Management tasks therefore remain complex. Faced with this, the concept of self-driving networking arose. It consists of leveraging recent technological advancements and scientific innovation in Artificial Intelligence (AI)/Machine Learning (ML) together with SDN. Compared to traditional management approaches that use only analytic mathematical models and optimization, this new paradigm is a data-driven approach. Management operations leverage the ability of ML to exploit hidden patterns in data to create knowledge. This association of SDN and AI/ML, with the promise of simplifying network management, requires many challenges to be addressed. Self-driving networking, or full network automation, is the "Holy Grail" of this association. In this thesis, two of these challenges retain our attention. The first is efficient data collection with SDN, especially real-time telemetry. For this challenge, we propose COCO, for COnfidence-based COllection, a low-overhead near-real-time data collection framework in SDN. Data of interest are collected efficiently from the data plane to the control plane, where they are used either by traditional management applications or by machine-learning-based algorithms. Secondly, we tackle the effectiveness of using machine learning to handle complex management tasks. We consider application performance optimization in data centers. We propose a machine-learning-based incast performance inference, where analytical models struggle to provide general and expert-knowledge-free performance models. With this ML performance model, smart buffering schemes or other QoS optimization algorithms could dynamically optimize traffic performance. These ML-based management schemes are built upon SDN, leveraging its centralized global view, telemetry capabilities, and management flexibility. The effectiveness of our efficient data collection framework and the machine-learning-based performance optimization shows promising results. We expect that improved SDN monitoring with AI/ML analytics capabilities can considerably augment network management and take a big step in the self-driving network journey.
... In the classical internet, the Transmission Control Protocol and Internet Protocol (TCP/IP) [27] are the foundational protocols that provide unified, reliable, ordered, and error-checked delivery of classical information streams between applications on the internet. One would expect similar quantum analogues that allow quantum computers on different platforms to interconnect. ...
Article
Full-text available
A quantum network, which involves multiple parties pinging each other with quantum messages, could revolutionize communication, computing and basic sciences. The future internet will be a global system of various packet switching quantum and classical networks and we call it quantum internet. To build a quantum internet, unified protocols that support the distribution of quantum messages within it are necessary. Intuitively one would extend classical internet protocols to handle quantum messages. However, classical network mechanisms, especially those related to error control and reliable connection, implicitly assume that information can be duplicated, which is not true in the quantum world due to the no-cloning theorem and monogamy of entanglement. In this paper, we investigate and propose protocols for packet quantum network intercommunication. To handle the packet loss problem in transport, we propose a quantum retransmission protocol based on the recursive use of a quantum secret sharing scheme. Other internet protocols are also discussed. In particular, the creation of the logical process-to-process connections is accomplished by a quantum version of the three-way handshake protocol.
... Therefore, QEC codes are required, and network functionalities have to be designed accordingly, making the system complex. Such design complexity is present in every layer of QI design, from the error-correction functionality layer, to the medium access control and route discovery layer, to layer-4 protocols such as TCP/IP [2], [323]. ...
Article
Full-text available
The advanced notebooks, mobile phones, and internet applications in today’s world that we use are all entrenched in classical communication bits of zeros and ones. Classical internet has laid its foundation originating from the amalgamation of mathematics and Claude Shannon’s theory of information. However, today’s internet technology is a playground for eavesdroppers. This poses a serious challenge to various applications that rely on classical internet technology, and it has motivated the researchers to switch to new technologies that are fundamentally more secure. By exploring the quantum effects, researchers paved the way into quantum networks that provide security, privacy, and range of capabilities such as quantum computation, communication, and metrology. The realization of Quantum Internet (QI) requires quantum communication between various remote nodes through quantum channels guarded by quantum cryptographic protocols. Such networks rely upon quantum bits (qubits) that can simultaneously take the value of zeros and ones. Due to the extraordinary properties of qubits such as superposition, entanglement, and teleportation, it gives an edge to quantum networks over traditional networks in many ways. At the same time, transmitting qubits over long distances is a formidable task and extensive research is going on satellite-based quantum communication, which will deliver breakthroughs for physically realizing QI in near future. In this paper, QI functionalities, technologies, applications and open challenges have been extensively surveyed to help readers gain a basic understanding of the infrastructure required for the development of the global QI.
... • Gateway layer -There are broadly two-fold arguments behind the inclusion of this layer in the architecture stack: One is motivated by the DARPA project [11,14] in which the notion of gateways is defined for connecting distinguishable networks, and the other is to provide end-to-end communication between disparate blockchains. Hence, the protocols designed under this layer should define the configuration that supports cross-blockchain routing for transferring messages. ...
Article
Full-text available
Despite the enormous number of online docking services available, consumers sometimes struggle to discover the services they require. On the other hand, when looking for matching or recommendation platforms from an academic or industry perspective, most of the related work one can find describes centralized systems. Unfortunately, centralized systems often have shortcomings, such as being advertisement-driven, lack of trust, non-transparency, and unfairness. The authors propose a peer-to-peer (P2P) service network for service discovery and recommendation. ServiceNet is a blockchain-based service ecosystem that promises to provide an open, transparent, self-growing, and self-managing service environment. The article presents the basic concept, the prototype's architecture design, and the initial prototype's implementation and performance assessment.
... For these computers to be able to communicate, they need common "languages" known as communication protocols. The protocol suite TCP/IP [Cerf 1974], a conceptual model developed in the 1970s, is the de facto standard used by the Internet to communicate between networks and devices. The Internet is a network of networks. ...
... • Gateway layer -There are broadly two-fold arguments behind the inclusion of this layer in the architecture stack: One is motivated by the DARPA project [11,14] in which the notion of gateways is defined for connecting distinguishable networks, and the other is to provide end-to-end communication between disparate blockchains. Hence, the protocols designed under this layer should define the configuration that supports cross-blockchain routing for transferring messages. ...
Article
Unprecedented attention towards blockchain technology is serving as a game-changer in fostering the development of blockchain-enabled distinctive frameworks. However, fragmentation unleashed by its underlying concepts hinders different stakeholders from effectively utilizing blockchain-supported services, resulting in the obstruction of its wide-scale adoption. To explore synergies among the isolated frameworks requires comprehensively studying inter-blockchain communication approaches. These approaches broadly come under the umbrella of Blockchain Interoperability (BI) notion, as it can facilitate a novel paradigm of an integrated blockchain ecosystem that connects state-of-the-art disparate blockchains. Currently, there is a lack of studies that comprehensively review BI, which works as a stumbling block in its development. Therefore, this article aims to articulate potential of BI by reviewing it from diverse perspectives. Beginning with a glance of blockchain architecture fundamentals, this article discusses its associated platforms, taxonomy, and consensus mechanisms. Subsequently, it argues about BI’s requirement by exemplifying its potential opportunities and application areas. Concerning BI, an architecture seems to be a missing link. Hence, this article introduces a layered architecture for the effective development of protocols and methods for interoperable blockchains. Furthermore, this article proposes an in-depth BI research taxonomy and provides an insight into the state-of-the-art projects. Finally, it determines possible open challenges and future research in the domain.
... TCP is a widely-used transport protocol on the Internet and DCNs [7,16]. We adopt TCP as the transport mechanism in the load balancing scheme to keep the transport layer as simple as possible. ...
Article
In this paper, we evaluate the performance of packet-based load balancing in data center networks (DCNs). Throughput and flow completion time are some of the main metrics considered to evaluate the transport of flows in the presence of long flows in a DCN. Load balancing in a DCN may improve those metrics, but it may also generate out-of-order packet forwarding. Therefore, we investigate the impact of out-of-order packet delivery on the throughput and flow completion time of long and short flows, respectively, in a DCN. We focus on per-packet load balancing. Our simulations show the presence of out-of-order packet delivery in a DCN using this load balancing approach. Simulation results also reveal that packet-based load balancing may yield smaller average flow completion time for short flows and larger average throughput for long flows than the single-path transport model used by the Transmission Control Protocol (TCP), which prevents out-of-order packet delivery. Queueing diversity in the multipath structure of DCNs promotes susceptibility to out-of-order delivery. As the delay difference between alternative paths decreases, the occurrence of out-of-order packet delivery in packet-based load balancing also decreases. Therefore, under the studied scenarios, the benefits of packet-based load balancing seem to outweigh the out-of-order problem.
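The reordering effect described in the abstract can be illustrated with a toy, hedged simulation (not the authors' simulator): packets are sprayed round-robin over two paths with different one-way delays, and the reordering observed at the receiver shrinks as the delay difference shrinks. All numbers are illustrative.

```python
# Toy model of per-packet load balancing over two paths with different delays.
def arrival_order(num_packets: int, delay_a: float, delay_b: float, gap: float):
    """Spray packets round-robin over paths A and B and sort by arrival time."""
    arrivals = []
    for seq in range(num_packets):
        send_time = seq * gap
        delay = delay_a if seq % 2 == 0 else delay_b
        arrivals.append((send_time + delay, seq))
    return [seq for _, seq in sorted(arrivals)]

def count_reordered(order):
    """Count adjacent pairs that arrive out of sequence."""
    return sum(1 for a, b in zip(order, order[1:]) if a > b)

print(count_reordered(arrival_order(10, delay_a=1.0, delay_b=5.0, gap=1.0)))  # noticeable reordering
print(count_reordered(arrival_order(10, delay_a=1.0, delay_b=1.5, gap=1.0)))  # little or none
```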
... It may concurrently submit proposals over those slots without waiting for confirmations, provided it does not wander further than Γ slots ahead of its lastChosen offset. Its privilege is extended as it learns that its prior marked offers were chosen, in a manner that is comparable to the sliding window flow control protocol [23]. SP-Γ requires several amendments to the base protocol. ...
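For readers unfamiliar with the analogy, here is a minimal Python sketch of sliding-window flow control; the names are hypothetical, and the code illustrates only the windowing idea referenced in the excerpt, not the Spire/SP-Γ protocol itself.

```python
# Minimal sliding-window sketch: at most `window` unacknowledged items in flight,
# and the base advances only as earlier items are acknowledged (cf. Gamma slots
# ahead of lastChosen in the excerpt). Hypothetical names, illustration only.
class SlidingWindowSender:
    def __init__(self, window: int):
        self.window = window          # maximum number of outstanding items
        self.base = 0                 # oldest unacknowledged sequence number
        self.next_seq = 0             # next sequence number to send
        self.acked = set()

    def can_send(self) -> bool:
        return self.next_seq < self.base + self.window

    def send(self) -> int:
        assert self.can_send(), "window full: must wait for acknowledgements"
        seq = self.next_seq
        self.next_seq += 1
        return seq

    def ack(self, seq: int) -> None:
        self.acked.add(seq)
        while self.base in self.acked:   # slide the window over the confirmed prefix
            self.base += 1

s = SlidingWindowSender(window=3)
sent = [s.send() for _ in range(3)]   # 0, 1, 2 in flight concurrently
assert not s.can_send()
s.ack(0)                              # confirming the oldest item opens one more slot
assert s.can_send() and s.send() == 3
```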
Article
Full-text available
All existing solutions to distributed consensus are organised around a Paxos-like structure wherein processes contend for exclusive leadership in one phase, and then either use their dominant position to propose a value in the next phase or elect an alternate leader. This approach may be characterised as adversarial and phase-asymmetric, requiring distinct message schemas and process behaviours for each phase. In over three decades of research, no algorithm has diverged from this basic model, alluding to it perhaps being the only viable solution to consensus. This paper presents a new consensus algorithm named Spire, characterised by a phase-symmetric, cooperative structure. Processes do not contend for leadership; instead, they collude to iteratively establish a dominant value and may do so concurrently without conflicting. Each successive iteration is structured identically to the previous, employing the same messages and invoking the same behaviour. By these characteristics, Spire buckles the trend in protocol design, proving that at least two disjoint cardinal solutions to consensus exist. The resulting phase symmetry halves the number of distinct messages and behaviours, offering a clear intuition and an approachable foundation for learning consensus and building practical systems.
... It may concurrently submit proposals over those slots without waiting for confirmations, provided it does not wander further than Γ slots ahead of its lastChosen offset. Its privilege is extended as it learns that its prior marked offers were chosen, in a manner that is comparable to the sliding window flow control protocol [23]. ...
Preprint
Full-text available
All existing solutions to distributed consensus are organised around a Paxos-like structure wherein processes contend for exclusive leadership in one phase, and then either use their dominant position to propose a value in the next phase or elect an alternate leader. This approach may be characterised as adversarial and phase-asymmetric, requiring distinct message schemas and process behaviours for each phase. In over three decades of research, no algorithm has diverged from this basic model, alluding to it perhaps being the only viable solution to consensus. This paper presents a new consensus algorithm named Spire, characterised by a phase-symmetric, cooperative structure. Processes do not contend for leadership; instead, they collude to iteratively establish a dominant value and may do so concurrently without conflicting. Each successive iteration is structured identically to the previous, employing the same messages and invoking the same behaviour. By these characteristics, Spire buckles the trend in protocol design, proving that at least two disjoint cardinal solutions to consensus exist. The resulting phase symmetry halves the number of distinct messages and behaviours, offering a clear intuition and an approachable foundation for learning consensus and building practical systems.
... It includes a number of different verification problems. A first set contains encodings of a variety of communication protocols from [37,14,30,2]: the alternating bit protocol, the positive acknowledgement with retransmission protocol, the bounded retransmission protocol, and the sliding window protocols. The protocols are parameterised with the number of messages to send and, when applicable, the window size. ...
Preprint
Full-text available
We develop an algorithm that combines the advantages of priority promotion - the leading approach to solving large parity games in practice - with the quasi-polynomial time guarantees offered by Parys' algorithm. Hybridising these algorithms sounds both natural and difficult, as they both generalise the classic recursive algorithm in different ways that appear to be irreconcilable: while the promotion transcends the call structure, the guarantees change on each level. We show that an interface that respects both is not only effective, but also efficient.
... One milestone moment in the history of the nascent Internet was the definition of the IP datagram structure [4] with the support of DARPA and their goals for the survivable network [5,6]. The architecture of the Internet viewed each network as an autonomous system (AS), where each AS would operate its own interior routing protocol. ...
Preprint
Full-text available
In the current work we discuss the notion of gateways as a means for interoperability across different blockchain systems. We discuss two key principles for the design of gateway nodes and scalable gateway protocols, namely (i) the opaque ledgers principle as the analogue of the autonomous systems principle in IP datagram routing, and (ii) the externalization of value principle as the analogue of the end-to-end principle in the Internet architecture. We illustrate the need for a standard gateway protocol by describing a unidirectional asset movement protocol between two peer gateways, under the strict condition that both blockchains are private/permissioned with their ledgers inaccessible to external entities. Several aspects of gateways and the gateway protocol are discussed, including gateway identities, gateway certificates and certificate hierarchies, passive locking transactions by gateways, and the potential use of delegated hash-locks to expand the functionality of gateways.
... Rateless codes, in which reliable decoding does not occur at a predetermined time (i.e., blocklength) n but may vary depending on the channel realization, are somewhat related to our problem. See, for example, [4], [5], [6] in the context of the erasure channel, [8], [9], [10], [11], [12], [13] in the context of adaptive routing protocols for networks, and [14], [15], [16], [17] in the context of discrete memoryless networks such as multiple access, relay, and broadcast channels. ...
Preprint
The traditional notion of capacity studied in the context of memoryless network communication builds on the concept of block-codes and requires that, for sufficiently large blocklength n, all receiver nodes simultaneously decode their required information after n channel uses. In this work, we generalize the traditional capacity region by exploring communication rates achievable when some receivers are required to decode their information before others, at different predetermined times; referred here as the "time-rate" region. Through a reduction to the standard notion of capacity, we present an inner-bound on the time-rate region. The time-rate region has been previously studied and characterized for the memoryless broadcast channel (with a sole common message) under the name "static broadcasting".
... Therefore, QEC codes are required, and network functionalities have to be designed accordingly, making the system complex. Such design complexity is present in every layer of QI design, from the error-correction functionality layer, to the medium access control and route discovery layer, to layer-4 protocols such as TCP/IP [2], [317]. ...
Preprint
Full-text available
The advanced notebooks, mobile phones, and internet applications that we use in today's world are all entrenched in classical communication bits of zeros and ones. The classical internet has laid its foundation originating from the amalgamation of mathematics and Claude Shannon's theory of information. But today's internet technology is a playground for eavesdroppers. This poses a serious challenge to various applications that rely on classical internet technology. This has motivated researchers to switch to new technologies that are fundamentally more secure. Exploring quantum effects, researchers paved the way to quantum networks that provide security, privacy and a range of capabilities such as quantum computation, communication and metrology. The realization of the quantum internet requires quantum communication between various remote nodes through quantum channels guarded by quantum cryptographic protocols. Such networks rely upon quantum bits (qubits) that can simultaneously take the value of zeros and ones. Due to the extraordinary properties of qubits such as entanglement, teleportation and superposition, quantum networks have an edge over traditional networks in many ways. At the same time, transmitting qubits over long distances is a formidable task, and extensive research is ongoing on quantum teleportation over such distances, which will become a breakthrough in physically realizing the quantum internet in the near future. In this paper, quantum internet functionalities, technologies, applications and open challenges have been extensively surveyed to help readers gain a basic understanding of the infrastructure required for the development of the global quantum internet.
... It may concurrently submit proposals over those slots without waiting for confirmations, provided it does not wander further than Γ slots ahead of its lastChosen offset. Its privilege is extended as it learns that its prior marked offers were chosen, in a manner that is comparable to the sliding window flow control protocol [23]. ...
Preprint
Full-text available
All existing solutions to distributed consensus are organised around a Paxos-like structure wherein processes contend for exclusive leadership in one phase, and then either use their dominant position to propose a value in the next phase or elect an alternate leader. This approach may be characterised as adversarial and phase-asymmetric, requiring distinct message schemas and process behaviours for each phase. In over three decades of research, no algorithm has diverged from this basic model, alluding to it perhaps being the only viable solution to consensus. This paper presents a new consensus algorithm named Spire, characterised by a phase-symmetric, cooperative structure. Processes do not contend for leadership; instead, they collude to iteratively establish a dominant value and may do so concurrently without conflicting. Each successive iteration is structured identically to the previous, employing the same messages and invoking the same behaviour. By these characteristics, Spire buckles the trend in protocol design, proving that at least two disjoint cardinal solutions to consensus exist. The resulting phase symmetry halves the number of distinct messages and behaviours, offering a clear intuition and an approachable foundation for learning consensus and building practical systems.
Book
Full-text available
This book is a comprehensive guide to the importance of digital literacy amid technological progress. Beginning with a definition of digital literacy and its urgency, readers are introduced to the basic principles of its development. The book covers various important topics such as digital media and technology, information management, education on the lawful use of technology, digital health, the development of social media, content creators, digital start-ups, artificial intelligence, the Internet of Things, blockchain, and cryptocurrency. Through this discussion, readers gain a deep understanding of the various important aspects of the digital world.
Book
In this book, the role of fiber-optic communication is discussed and highlighted by describing the advantages and disadvantages of fiber-optic networks, which are considered the major backbone of today's communication networks. Using an optical network to transfer data brings several benefits, including higher data rates, lower losses, and a reduced impact of attenuation. The book consists of five chapters. The first chapter presents fiber optics for modern communication and the evolution of fiber optics. Chapter Two covers the components of fiber-optic networks, the modes of propagation, and the types of dispersion; different optical amplification techniques are also included in this chapter. The third chapter addresses transmission in optical networks, the processes of modulation and demodulation, and the parts of the optical network, including the transmitter and the receiver; multiplexing techniques are covered, and finally the types of optical cables and connectors are clarified. In addition, Chapter Four provides an overview of passive optical networks, the types of point-to-point networks, and point-to-multipoint networks; the topology-based types of optical networks are also clarified in this chapter. Finally, the last chapter covers the future challenges and proposed solutions for optical networks.
Chapter
Full-text available
Abstract: This contribution extends the concept of the protocol to include technical protocols. An important example is the Internet protocols, which regulate the message traffic between networked computers. As a kind of "transport mechanism", the Internet protocols establish and maintain the connections between computers in the network. In the form of the Internet protocols, the concept of the protocol is thus not merely continued; rather, the concept of the protocol is expanded even further. Technical protocols no longer merely have the task of ensuring, by means of rules, an exchange that is as orderly as possible, as is the case with diplomatic or courtly protocols. Without technical protocols, the interconnection of computers would not merely have been poor or disorderly; it would not have been possible at all.
Article
We have designed a chat application that involves multiple clients, utilizing the socket module to establish client-server connections. These sockets serve as internal endpoints for data transmission and reception; two sockets are present in every single connection. TCP sockets are used in the implementation of this application. The server socket is bound to a port on localhost or on the machine. When a client joins, a socket connection is established with the server on the same port used by the server-side code. The application is built to allow multiple clients to connect to a central server and exchange messages with each other in real time. The implementation of the application includes the use of multi-threading, socket programming, and JavaFX for the user interface. The paper discusses the design and implementation of the chat application and provides an evaluation of its performance.
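As a rough sketch of the client-server socket pattern the abstract describes (shown in Python for brevity; the paper's implementation uses Java with JavaFX, and the port number below is hypothetical), a threaded server binds to a localhost port, accepts clients, and relays each received message to the other connected clients.

```python
# Hedged sketch of a threaded broadcast chat server; not the paper's Java code.
import socket
import threading

HOST, PORT = "127.0.0.1", 5000        # hypothetical port; the paper does not specify one
clients = []
clients_lock = threading.Lock()

def handle_client(conn: socket.socket):
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:
                break
            with clients_lock:          # relay the message to every other connected client
                for other in clients:
                    if other is not conn:
                        other.sendall(data)
    with clients_lock:
        clients.remove(conn)

def run_server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))          # the server socket is bound to a localhost port
        srv.listen()
        while True:
            conn, _ = srv.accept()      # each client connects on the same port
            with clients_lock:
                clients.append(conn)
            threading.Thread(target=handle_client, args=(conn,), daemon=True).start()

# A client would simply do:
#   c = socket.create_connection((HOST, PORT)); c.sendall(b"hello"); print(c.recv(1024))
```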
Chapter
Today, driven by technological transformation, the business world has witnessed the rise of new products and services offered to the market digitally. In parallel with this trend, new business models are being developed, including revenue models, cost structures, collaborations, distribution channels, value propositions, and customer segmentation suited to the digital environment. This requires legislation to be harmonized so that the commercial activities of enterprises in digital environments can be brought into the commercial system, controlled and taxed, and the associated legal processes operated correctly. In this context, the study proposes a blockchain model based on smart contracts that can serve as a basis for the lossless auditing and taxation of digital product and service delivery activities carried out for commercial gain.
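As a purely illustrative aside, the notion of a lossless, auditable record of taxable digital sales can be pictured as an append-only, hash-chained ledger. The Python sketch below is a toy under that assumption; it is not the smart-contract model proposed in the chapter, and the record fields and tax rate are invented for illustration.

# Hypothetical sketch of an append-only, hash-chained sales ledger used to
# illustrate auditable recording of taxable digital sales. NOT the chapter's
# smart-contract model; record fields and the tax rate are assumed.
import hashlib
import json
import time

TAX_RATE = 0.18  # assumed VAT-style rate, for illustration only

class SalesLedger:
    def __init__(self):
        self.entries = []  # each entry links to the hash of the previous one

    def record_sale(self, seller: str, buyer: str, item: str, price: float) -> dict:
        entry = {
            "timestamp": time.time(),
            "seller": seller,
            "buyer": buyer,
            "item": item,
            "price": price,
            "tax_due": round(price * TAX_RATE, 2),
            "prev_hash": self.entries[-1]["hash"] if self.entries else None,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Check that no recorded sale has been altered or removed."""
        prev = None
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

ledger = SalesLedger()
ledger.record_sale("store.example", "alice", "e-book", 10.0)
assert ledger.verify()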
Thesis
Full-text available
The web is currently an important propagator of information and content in several areas and sectors of society. In a little more than two decades, it has grown to become a global information medium, a communication tool, and a record of life in the 21st century. Although there are perceptions of the alleged solidity of the digital environment, surveys point to the disappearance of websites within short periods of time. This research was concerned with the preservation of digital heritage, specifically the preservation and archiving of webpages. The following question therefore guided the research: where and how are webpages being preserved? The general objective was to study digital heritage preservation by memory institutions that archive webpages and belong to the International Internet Preservation Consortium (IIPC), based on UNESCO's Charter on the Preservation of Digital Heritage. The methodological procedures adopted are exploratory and descriptive research, based on bibliographic and documentary sources. The universe of the empirical part was delimited to the founding and Ibero-American member institutions of the IIPC. We sought to conceptualize and characterize the preservation of digital heritage and some related topics, such as the evolution of the internet and the web; digital heritage in the context of archives, national libraries, and the IIPC; web archiving and its actors in the international and national context; and the preservation of digital heritage and the steps for archiving and preserving the web. The study also identified and characterized the IIPC's founding memory institutions and the Ibero-American ones through comparative frameworks. The policies, actions, and criteria for digital preservation used by the Ibero-American institutions of the IIPC were examined with a focus on institutional governance, and the problems and barriers to the preservation of webpages were identified, with reflections on those institutions, yielding ten categories of problems and barriers based on the texts of the bibliographic and documentary research and the responses of the researched institutions. It is concluded that national libraries, located on the European continent, lead the task of preserving the web within the researched scope, sharing the work with other types of institutions, including national archives. Exceptionally, the task is performed by non-profit organizations or by organizations linked to teaching and research, using the legal deposit framework for the collection of these types of content and repeating rules applied to analog documents, such as access and use restricted to the institution's physical location. Digital preservation of the web is a challenge for both professionals and institutions, and theory does not always move at the same pace as practice. We conclude that heritage institutions, such as libraries and archives, must reflect and react, as UNESCO has well put it, leading this digital heritage preservation movement, because even though the challenge is immense, these complex objects must be considered part of Brazil's national heritage. Keywords: Digital preservation. Digital heritage. Web archiving. Preservation of the web.
Chapter
Media studies pays particular attention to the historical contexts of individual media and media genres. This makes it possible to understand how a given medium emerged in its specific contemporary context, what transformations it undergoes, and in which direction its growth, or indeed its potential decline, moves in relation to other media. As proposed here, media history can begin with the history of writing and script, sketch images, telegraphy, and telecommunications, outline radio, film, and television, and finally arrive at new media. Media history is, as becomes clear, the necessary vehicle for grasping the nature of media; it is one of the essential prerequisites of work in media and cultural studies.
Article
Full-text available
The purpose of this article is to discuss an ethnography of code, specifically code ethnography, a method for examining code as a socio-technical actor, considering its social, political, and economic dynamics in the context of digital infrastructures. While it can be applied to any code, the article presents the results of applying code ethnography to the study of internet interconnection dynamics, with the Border Gateway Protocol (BGP) as the code and two of the world's largest internet exchange points (IXPs), DE-CIX Frankfurt and IX.br São Paulo, as the points of data collection. The results show inequalities in the flows of information between the global North and the global South, and a concentration of power at the level of interconnection infrastructure hitherto unknown in the context of the political economy of the internet. Code ethnography is explained in terms of code assemblage, code literacy, and code materiality. It demonstrates the grammar of BGP in context, making its logical and physical dimensions visible in the analysis of the formation of giant internet nodes and infrastructural interdependencies in the internet's information circulation infrastructure.
Chapter
In this chapter, the authors discuss various practical issues that are of fundamental importance to the development and deployment of digital communication systems. Perhaps the most important issue to be addressed by any system designer is user authentication. An emerging and quickly expanding area of communications for which data security and strong authentication are fundamental is so-called e-government, that is, all the on-line exchanges between any level of government and its citizenry. Authentication is a key component for securing data and communication systems. The fundamental parts that make all these systems work together and transport information without errors are what we call communication protocols, such as the TCP/IP suite. The authors also discuss some of the topics that are generating intense debate and are relevant to the application of cryptography.
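Since the chapter stresses authentication as a key component alongside transport protocols such as TCP/IP, a minimal challenge-response exchange over a shared secret may help fix ideas. The Python sketch below uses the standard hmac module; it is an illustrative assumption on my part, not a scheme taken from the chapter, and the key, nonce size, and message layout are invented.

# Minimal challenge-response authentication sketch using a shared secret and
# HMAC-SHA-256. Purely illustrative; key and nonce handling are assumptions.
import hmac
import hashlib
import os

SHARED_KEY = b"pre-provisioned secret"   # assumed to be distributed out of band

def make_challenge() -> bytes:
    """Server side: issue a fresh random nonce."""
    return os.urandom(16)

def respond(challenge: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Client side: prove knowledge of the key without revealing it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, key: bytes = SHARED_KEY) -> bool:
    """Server side: recompute the MAC and compare in constant time."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

nonce = make_challenge()
assert verify(nonce, respond(nonce))                 # legitimate client passes
assert not verify(nonce, respond(nonce, b"wrong"))   # wrong key is rejected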
Chapter
Full-text available
The purpose of this chapter is to address challenges related to the integration and implementation of the developing internet of things (IoT) into people's daily lives. Demands for communication between devices, sensors, and systems are reciprocally driving increased demands for people to communicate and to manage the growing digital ecosystem of the IoT and an unprecedented volume of data. A larger study was established to explore how digital transformation through unified communication and collaboration (UC&C) technologies impacts the productivity and innovation of people in the context of one of the world's largest automotive enterprises, General Motors (GM). An analysis and exploration of this research milieu, supported by a critical realist interpretation of solutions, suggested that recommendations can be made: the integration and implementation of digital transformation, delivered via UC&C technologies, affects productivity and the opportunity for driving innovation within a global automotive enterprise.
Article
What if, instead of having to implement controversial user tracking techniques, Internet advertising and marketing companies asked explicitly to be granted access to user data by name and category, such as Alice→Mobility→05-11-2020? The technology for implementing this already exists: Information Centric Networks (ICN), developed for over a decade in the framework of Next Generation Internet (NGI) initiatives. Beyond named access to personal data, ICN's in-network storage capability can be used as a substrate for retrieving aggregated, anonymized data, or even for executing complex analytics within the network, with no personal data leaking outside. In this opinion article we discuss how ICNs, trusted execution environments, and digital watermarking can be combined to build a personal data overlay inter-network in which users will be able to control who gets access to their personal data, know where each copy of said data is, negotiate payments in exchange for data, claim ownership, and establish accountability for data leakages due to malfunction or malice. Of course, coming up with concrete designs for achieving all of the above will require a huge effort from a dedicated community willing to change how personal data are handled on the Internet. Our hope is that this opinion article can plant some initial seeds in this direction.
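To illustrate the named-access idea (for example Alice→Mobility→05-11-2020) in the simplest possible terms, the following Python toy models access grants as name prefixes over hierarchically named data objects. It is a sketch under assumed names and a made-up permission model, not an ICN implementation or anything proposed in the article.

# Toy illustration of name-based access to personal data, loosely inspired by
# hierarchical naming in Information Centric Networks. Names, the permission
# model, and the in-memory store are hypothetical.
from typing import Optional

class NamedDataStore:
    def __init__(self):
        self.objects = {}   # maps name tuples, e.g. ("Alice", "Mobility", "05-11-2020"), to data
        self.grants = set() # name prefixes a consumer has been explicitly granted

    def publish(self, name: tuple, data: bytes) -> None:
        self.objects[name] = data

    def grant(self, prefix: tuple) -> None:
        """The user grants access to every object under this name prefix."""
        self.grants.add(prefix)

    def fetch(self, name: tuple) -> Optional[bytes]:
        """Return the object only if some granted prefix covers the requested name."""
        for prefix in self.grants:
            if name[:len(prefix)] == prefix:
                return self.objects.get(name)
        return None  # no matching grant: the request is refused

store = NamedDataStore()
store.publish(("Alice", "Mobility", "05-11-2020"), b"<location trace>")
store.grant(("Alice", "Mobility"))                    # Alice allows access to her mobility data only
assert store.fetch(("Alice", "Mobility", "05-11-2020")) is not None
assert store.fetch(("Alice", "Health", "05-11-2020")) is None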
Article
Full-text available
Distributed systems have been an active field of research for over 60 years and have played a crucial role in computer science, enabling the invention of the Internet that underpins all facets of modern life. Through technological advancements and their changing role in society, distributed systems have undergone a perpetual evolution, with each change resulting in the formation of a new paradigm. Each new distributed system paradigm, prominent modern examples of which include cloud computing, Fog computing, and the Internet of Things (IoT), allows for new forms of commercial and artistic value, yet also ushers in new research challenges that must be addressed in order to realize and enhance their operation. However, it is necessary to precisely identify what factors drive the formation and growth of a paradigm, and how unique the research challenges within modern distributed systems are in comparison to prior generations of systems. The objective of this work is to study and evaluate the key factors that have influenced and driven the evolution of distributed system paradigms, from early mainframes and the inception of the global inter-network to contemporary systems such as edge computing, Fog computing, and IoT. Our analysis highlights that the assumptions driving distributed systems appear to be changing, including (1) an accelerated fragmentation of paradigms driven by commercial interests and by the physical limitations imposed by the end of Moore's law, (2) a transition away from generalized architectures and frameworks towards increasing specialization, and (3) each paradigm architecture resulting in some form of pivoting between centralized and decentralized coordination. Finally, we discuss present-day and future challenges of distributed systems research pertaining to studying complex phenomena at scale and the role of distributed systems research in the context of climate change.
Article
Full-text available
In this paper a computer network is defined to be a set of autonomous, independent computer systems, interconnected so as to permit interactive resource sharing between any pair of systems. An overview of the need for a computer network, the requirements of a computer communication system, a description of the properties of the communication system chosen, and the potential uses of such a network are described in this paper.
Article
Full-text available
For many years, small groups of computers have been interconnected in various ways. Only recently, however, has the interaction of computers and communications become an important topic in its own right. In 1968, after considerable preliminary investigation and discussion, the Advanced Research Projects Agency of the Department of Defense (ARPA) embarked on the implementation of a new kind of nationwide computer interconnection known as the ARPA Network. This network will initially interconnect many dissimilar computers at ten ARPA-supported research centers with 50-kilobit common-carrier circuits. The network may be extended to include many other locations and circuits of higher bandwidth.
Article
Full-text available
A collection of basic ideas is presented, which have been evolved by various workers over the past four years to provide a suitable framework for the design and analysis of multiprocessing systems. The notions of process and state vector are discussed, and the nature of basic operations on processes is considered. Some of the connections between processes and protection are analyzed. A very general approach to priority-oriented scheduling is described, and its relationship to conventional interrupt systems is explained. Some aspects of time-oriented scheduling are considered. The implementation of the scheduling mechanism is analyzed in detail and the feasibility of embodying it in hardware established. Finally, several methods for interlocking the execution of independent processes are presented and compared.
Article
A computer network is being developed in France, under government sponsorship, to link about twenty heterogeneous computers located in universities, research centers, and data processing centers. The goals are to set up a prototype network in order to foster experiments in various areas, such as data communications, computer interaction, cooperative research, and distributed data bases. In order to speed up the implementation, standard equipment is used, and modifications to operating systems are minimized. Rather, the design effort bears on a carefully layered architecture, allowing for the gradual insertion of specialized protocols and services tailored to specific application and user classes. Host-host protocols, as well as error and flow control mechanisms, are based on a simple message exchange procedure, on top of which various options may be built for the sake of efficiency, error recovery, or convenience. Depending on available computer resources, these options can be implemented as user software, system modules, or front-end processor packages. CYCLADES uses a packet-switching sub-network, which is a transparent message carrier, completely independent of host-host conventions. While in many ways similar to ARPANET, it presents some distinctive differences in address and message handling, intended to facilitate interconnection with other networks. In particular, addresses can have variable formats, and messages are not delivered in sequence, so that they can flow out of the network through several gates toward an outside target.
Article
An experimental store-and-forward data communication network has been set up within the National Physical Laboratory (NPL) site. The system represents one element of a national data network scheme proposed by NPL. The network is currently offering a data communication service on a trial basis and is operating successfully. Work on an enhanced communication system is in hand. This new system has been organised along strictly hierarchical lines and is intended to meet the requirements of computer to computer communications in a general manner, permitting resource-sharing applications and remote-access computer services to be developed in the Laboratory.
Article
The Advanced Research Projects Agency (ARPA) Computer Network (hereafter referred to as the "ARPA network") is one of the most ambitious computer networks attempted to date. The types of machines and operating systems involved in the network vary widely. For example, the computers at the first four sites are an XDS 940 (Stanford Research Institute), an IBM 360/75 (University of California, Santa Barbara), an XDS SIGMA-7 (University of California, Los Angeles), and a DEC PDP-10 (University of Utah). The only commonality among the network membership is the use of highly interactive time-sharing systems; but, of course, these are all different in external appearance and implementation. Furthermore, no one node is in control of the network. This has insured generality and reliability but complicates the software.
Article
In this paper, we discuss flow control in a resource-sharing computer network. The resources consist of a set of inhomogeneous computers called hosts that are geographically distributed and are interconnected by a store-and-forward communications subnet. In the communication process, messages pass between hosts via the subnet. A protocol is used to control the flow of messages in such a way as to efficiently utilize the subnet and the host resources. In this paper, we examine in some detail the nature of the flow control required in the subnet and its relation to the host flow control and subnet performance.
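The host-to-subnet flow control discussed here can be pictured, in very simplified form, as a credit or window mechanism that limits how many messages a host may have outstanding in the subnet. The Python toy below illustrates that general idea under an assumed window size; it is not the scheme analyzed in the paper.

# Toy illustration of window-based (credit) flow control between a host and a
# store-and-forward subnet; window size and message model are assumptions.
from collections import deque

class FlowControlledSender:
    def __init__(self, window: int = 4):
        self.window = window      # max messages allowed into the subnet unacknowledged
        self.in_flight = deque()  # sequence numbers awaiting acknowledgement
        self.next_seq = 0

    def can_send(self) -> bool:
        return len(self.in_flight) < self.window

    def send(self) -> int:
        assert self.can_send(), "window closed: subnet has not freed buffer space"
        seq = self.next_seq
        self.in_flight.append(seq)
        self.next_seq += 1
        return seq

    def acknowledge(self, seq: int) -> None:
        # The subnet (or destination host) returns an acknowledgement, reopening the window.
        self.in_flight.remove(seq)

sender = FlowControlledSender(window=2)
first = sender.send()
second = sender.send()
assert not sender.can_send()       # window exhausted: the host must wait
sender.acknowledge(first)
assert sender.can_send()           # credit returned: sending may resume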
Article
Marketing surveys on data communication have indicated the possible need for a new data service. Technical studies have resulted in a number of detailed proposals for providing a service which, although functionally distinct, is physically integrated with other telecommunications services. The proposed network utilizes synchronous digital transmission with a processor-controlled TDM switching structure. A range of speeds is available. Users' data bytes are structured in "envelopes" to provide an inband control signaling facility. The use of a processor for switch control gives greater flexibility, allows the implementation of sophisticated diagnostics, and, with high-speed interswitch signaling, reduces overall call set-up times. The proposed network is hybrid in that it can operate in a conventional circuit switched mode, or operate in a "packet" switched mode. Extensive multiplexing is used to increase transmission utilization and reduce local area transmission costs.
Article
A system of communication between processes in a time-sharing system is described and the communication system is extended so that it may be used between processes distributed throughout a computer network. The hypothetical application of the system to an existing network is discussed.
Article
The development of resource-sharing networks can facilitate the provision of a wide range of economic and reliable computer services. Computer-communication networks allow the sharing of specialized computer resources such as data bases, programs, and hardware. Such a network consists of both the computer resources and a communications system interconnecting them and allowing their full utilization to be achieved. In addition, a resource-sharing network provides the means whereby increased cooperation and interaction can be achieved between individuals. An introduction to computer-to-computer networks and resource sharing is provided and some aspects of distributed computation are discussed.