Trading Floor Architecture

Executive Overview

Increased competition, higher market data volumes, and new regulatory demands are some of the driving forces behind industry changes. Firms try to maintain their competitive edge by constantly changing their trading strategies and increasing the speed of trading. A viable architecture has to include the latest technologies from both the network and application domains. It has to be modular to provide a manageable path to evolve each component with minimal disruption to the overall system. Therefore the architecture proposed by this paper is based on a services framework. We examine services such as ultra-low-latency messaging, latency monitoring, multicast, computing, storage, data and application virtualization, trading resilience, trading mobility, and thin client.

The solution to the complex requirements of the next-generation trading platform must be built with a holistic mindset, crossing the boundaries of traditional silos like business and technology or applications and networking. This document's main goal is to provide guidelines for building an ultra-low-latency trading platform while optimizing raw throughput and message rates for both market data and FIX trading orders. To achieve this, we propose the following latency-reduction technologies:

- High-speed inter-connectivity: InfiniBand or 10 Gbps connectivity for the trading cluster
- High-speed messaging bus
- Application acceleration via RDMA without application re-coding
- Real-time latency monitoring and re-direction of trading traffic to the path with minimum latency

Industry Trends and Challenges

Next-generation trading architectures have to respond to increased demands for speed, volume, and efficiency. For example, the volume of options market data is expected to double after the introduction of options penny trading in 2007.
There is also regulatory demand for best execution, which requires handling price updates at rates that approach 1 million messages/sec for exchanges. Regulators also require visibility into the freshness of the data and proof that the client got the best possible execution.

In the short term, speed of trading and innovation are key differentiators. An increasing number of trades are handled by algorithmic trading applications placed as close as possible to the trade execution venue. A challenge with these "black-box" trading engines is that they compound the volume increase by issuing orders only to cancel them and re-submit them. The cause of this behavior is lack of visibility into which venue offers the best execution. The human trader is now a "financial engineer," a "quant" (quantitative analyst) with programming skills, who can adjust trading models on the fly. Firms develop new financial instruments like weather derivatives or cross-asset class trades, and they need to deploy the new applications quickly and in a scalable fashion.

In the long term, competitive differentiation should come from analysis, not just knowledge. The star traders of tomorrow assume risk, achieve true client insight, and consistently beat the market (source IBM: www-935.ibmservicesusimcpdfge510-6270-trader.pdf).

Business resilience has been one main concern of trading firms since September 11, 2001. Solutions in this area range from redundant data centers situated in different geographies and connected to multiple trading venues, to virtual trader solutions offering power traders most of the functionality of a trading floor in a remote location.

The financial services industry is one of the most demanding in terms of IT requirements.
The industry is experiencing an architectural shift towards Services-Oriented Architecture (SOA), Web services, and virtualization of IT resources. SOA takes advantage of the increase in network speed to enable dynamic binding and virtualization of software components. This allows the creation of new applications without losing the investment in existing systems and infrastructure. The concept has the potential to revolutionize the way integration is done, enabling significant reductions in the complexity and cost of such integration (gigervasidownloadMerrilLynchGigaSpacesWP.pdf).

Another trend is the consolidation of servers into data center server farms, while trader desks have only KVM extensions and ultra-thin clients (e.g., SunRay and HP blade solutions). High-speed Metro Area Networks enable market data to be multicast between different locations, enabling the virtualization of the trading floor.

High-Level Architecture

Figure 1 depicts the high-level architecture of a trading environment. The ticker plant and the algorithmic trading engines are located in the high-performance trading cluster in the firm's data center or at the exchange. The human traders are located in the end-user applications area. Functionally there are two application components in the enterprise trading environment: publishers and subscribers. The messaging bus provides the communication path between publishers and subscribers.

There are two types of traffic specific to a trading environment:

- Market data: carries pricing information for financial instruments, news, and other value-added information such as analytics. It is unidirectional and very latency sensitive, typically delivered over UDP multicast. It is measured in updates/sec and in Mbps. Market data flows from one or multiple external feeds, coming from market data providers like stock exchanges, data aggregators, and ECNs. Each provider has its own market data format.
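Because each provider publishes in its own format, firms run feed handlers that normalize provider-specific records into one common structure before they reach pricing engines or trading applications. The sketch below illustrates the idea with hypothetical field names and scaling rules; real feeds (RDF, RWF, B-Pipe) use different, binary encodings.

```python
from dataclasses import dataclass

@dataclass
class Tick:
    """Common normalized quote used by downstream consumers."""
    symbol: str
    bid: float
    ask: float
    ts: int  # provider timestamp (illustrative units)

# Hypothetical provider formats -- not any vendor's actual wire format.
def from_provider_a(raw: dict) -> Tick:
    # Provider A quotes prices directly in dollars
    return Tick(raw["sym"], raw["bid"], raw["ask"], raw["t"])

def from_provider_b(raw: dict) -> Tick:
    # Provider B quotes prices in 1/10000-dollar integer units
    return Tick(raw["ticker"], raw["bp"] / 10000, raw["ap"] / 10000, raw["ts"])

a = from_provider_a({"sym": "CSCO", "bid": 26.10, "ask": 26.12, "t": 1})
b = from_provider_b({"ticker": "CSCO", "bp": 261000, "ap": 261200, "ts": 2})
assert a.bid == b.bid  # both normalize to the same common form
```

Downstream consumers then subscribe to a single `Tick` stream regardless of which venue originated the update.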
The data is received by feed handlers, specialized applications that normalize and clean the data and then send it to data consumers such as pricing engines, algorithmic trading applications, or human traders. Sell-side firms also send the market data to their clients, buy-side firms such as mutual funds, hedge funds, and other asset managers. Some buy-side firms may opt to receive direct feeds from exchanges, reducing latency.

Figure 1: Trading Architecture for a Buy Side/Sell Side Firm

There is no industry standard for market data formats. Each exchange has its proprietary format. Financial content providers such as Reuters and Bloomberg aggregate different sources of market data, normalize it, and add news or analytics. Examples of consolidated feeds are RDF (Reuters Data Feed), RWF (Reuters Wire Format), and Bloomberg Professional Services Data.

To deliver lower-latency market data, both vendors have released real-time market data feeds which are less processed and have less analytics. With Bloomberg B-Pipe, Bloomberg de-couples their market data from their distribution platform, because a Bloomberg terminal is not required to receive B-Pipe. Wombat and Reuters Feed Handlers have announced support for B-Pipe.

A firm may decide to receive feeds directly from an exchange to reduce latency. The gains in transmission speed can be between 150 and 500 milliseconds. These feeds are more complex and more expensive, and the firm has to build and maintain its own ticker plant (financetechfeaturedshowArticle.jhtmlarticleID60404306).

- Trading orders: this type of traffic carries the actual trades. It is bi-directional and very latency sensitive. It is measured in messages/sec and Mbps.
Orders originate from a buy-side or sell-side firm and are sent to trading venues like an exchange or ECN for execution. The most common format for order transport is FIX (Financial Information eXchange, fixprotocol.org). The applications which handle FIX messages are called FIX engines, and they interface with order management systems (OMS). An optimization to FIX is called FAST (FIX Adapted for Streaming), which uses a compression schema to reduce message length and, in effect, reduce latency. FAST is targeted more at the delivery of market data and has the potential to become a standard. FAST can also be used as a compression schema for proprietary market data formats.

To reduce latency, firms may opt to establish Direct Market Access (DMA). DMA is the automated process of routing a securities order directly to an execution venue, therefore avoiding intervention by a third party (towergroupresearchcontentglossary.jsppage1&glossaryId383). DMA requires a direct connection to the execution venue.

The messaging bus is middleware software from vendors such as Tibco, 29West, Reuters RMDS, or an open-source platform such as AMQP. The messaging bus uses a reliable mechanism to deliver messages. The transport can be done over TCP/IP (TibcoEMS, 29West, RMDS, and AMQP) or over UDP/multicast (TibcoRV, 29West, and RMDS). One important concept in message distribution is the "topic stream," which is a subset of market data defined by criteria such as ticker symbol, industry, or a certain basket of financial instruments. Subscribers join topic groups mapped to one or multiple sub-topics in order to receive only the relevant information. In the past, all traders received all market data. At the current volumes of traffic, this would be sub-optimal.

The network plays a critical role in the trading environment.
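The FIX format mentioned above is, on the wire, a sequence of tag=value pairs separated by the SOH (0x01) character, framed by a BodyLength (tag 9) and a CheckSum (tag 10). A minimal sketch of building and parsing a FIX 4.2 message follows; it is illustrative only, as production FIX engines also handle sequence numbers, sessions, and validation.

```python
SOH = "\x01"

def build_fix(fields):
    """Assemble a FIX message: body fields, prefixed by BeginString (8) and
    BodyLength (9), terminated by CheckSum (10) = byte sum mod 256."""
    body = SOH.join(f"{tag}={val}" for tag, val in fields) + SOH
    head = f"8=FIX.4.2{SOH}9={len(body)}{SOH}"
    msg = head + body
    checksum = sum(msg.encode()) % 256  # over all bytes up to tag 10
    return f"{msg}10={checksum:03d}{SOH}"

def parse_fix(msg):
    """Split a FIX message back into a tag -> value dictionary."""
    return dict(pair.split("=", 1) for pair in msg.split(SOH) if pair)

# 35=D is NewOrderSingle; 55=Symbol; 54=1 (Buy); 38=OrderQty
order = build_fix([("35", "D"), ("55", "CSCO"), ("54", "1"), ("38", "100")])
fields = parse_fix(order)
assert fields["35"] == "D" and fields["55"] == "CSCO"
```

FAST achieves its latency gains by replacing this verbose tag=value encoding with field templates and binary delta compression, while keeping the same logical field content.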
Market data is carried to the trading floor, where the human traders are located, via a Campus or Metro Area high-speed network. High availability and low latency, as well as high throughput, are the most important metrics.

The high-performance trading environment has most of its components in the data center server farm. To minimize latency, the algorithmic trading engines need to be located in proximity to the feed handlers, FIX engines, and order management systems. An alternate deployment model has the algorithmic trading systems located at an exchange or a service provider with fast connectivity to multiple exchanges.

Deployment Models

There are two deployment models for a high-performance trading platform. Firms may choose to have a mix of the two:

- Data center of the trading firm (Figure 2): this is the traditional model, where a full-fledged trading platform is developed and maintained by the firm, with communication links to all the trading venues. Latency varies with the speed of the links and the number of hops between the firm and the venues. Figure 2: Traditional Deployment Model
- Co-location at the trading venue (exchanges, financial service providers (FSP)) (Figure 3): the trading firm deploys its automated trading platform as close as possible to the execution venues to minimize latency. Figure 3: Hosted Deployment Model

Services-Oriented Trading Architecture

We propose a services-oriented framework for building the next-generation trading architecture. This approach provides a conceptual framework and an implementation path based on modularization and minimization of inter-dependencies.
This framework provides firms with a methodology to:

- Evaluate their current state in terms of services
- Prioritize services based on their value to the business
- Evolve the trading platform to the desired state using a modular approach

The high-performance trading architecture relies on the following services, as defined by the services architecture framework represented in Figure 4.

Figure 4: Services Architecture Framework for High Performance Trading

Ultra-Low Latency Messaging Service

This service is provided by the messaging bus, which is a software system that solves the problem of connecting many-to-many applications. The system consists of:

- A set of pre-defined message schemas
- A set of common command messages
- A shared application infrastructure for sending the messages to recipients; the shared infrastructure can be based on a message broker or on a publish/subscribe model

The key requirements for the next-generation messaging bus are (source 29West):

- Lowest possible latency (e.g., less than 100 microseconds)
- Stability under heavy load (e.g., more than 1.4 million messages/sec)
- Control and flexibility (rate control and configurable transports)

There are efforts in the industry to standardize the messaging bus. Advanced Message Queuing Protocol (AMQP) is an example of an open standard championed by J.P. Morgan Chase and supported by a group of vendors such as Cisco, Envoy Technologies, Red Hat, TWIST Process Innovations, Iona, 29West, and iMatix. Two of the main goals are to provide a simpler path to inter-operability for applications written on different platforms, and modularity so that the middleware can easily evolve.

In very general terms, an AMQP server is analogous to an E-mail server, with each exchange acting as a message transfer agent and each message queue as a mailbox. The bindings define the routing tables in each transfer agent.
Publishers send messages to individual transfer agents, which then route the messages into mailboxes. Consumers take messages from mailboxes, which creates a model that is both powerful and flexible yet simple (source: amqp.orgtikiwikitiki-index.phppageOpenApproachWhyAMQP).

Latency Monitoring Service

The main requirements for this service are:

- Sub-millisecond granularity of measurements
- Near-real-time visibility without adding latency to the trading traffic
- Ability to differentiate application processing latency from network transit latency
- Ability to handle high message rates
- A programmatic interface for trading applications to receive latency data, thus enabling algorithmic trading engines to adapt to changing conditions
- Correlation of network events with application events for troubleshooting purposes

Latency can be defined as the time interval between when a trade order is sent and when the same order is acknowledged and acted upon by the receiving party.

Addressing the latency issue is a complex problem, requiring a holistic approach that identifies all sources of latency and applies different technologies at different layers of the system. Figure 5 depicts the variety of components that can introduce latency at each layer of the OSI stack. It also maps each source of latency to a possible solution and a monitoring solution. This layered approach can give firms a more structured way of attacking the latency issue, whereby each component can be thought of as a service and treated consistently across the firm.

Maintaining an accurate measure of the dynamic state of this time interval across alternative routes and destinations can be of great assistance in tactical trading decisions.
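Per the definition above (the interval between when an order is sent and when its acknowledgment arrives), a trading application can record this interval with monotonic timestamps around the send/acknowledge pair. A minimal sketch, where `send_order` and `await_ack` are hypothetical placeholders for a firm's own FIX-engine calls:

```python
import time

def measure_order_latency(send_order, await_ack):
    """Return the order round-trip latency in microseconds.

    Uses a monotonic nanosecond clock so the measurement is immune to
    wall-clock adjustments while the order is in flight.
    """
    t_sent = time.perf_counter_ns()
    send_order()   # hand the order to the FIX engine (placeholder)
    await_ack()    # block until the venue acknowledges (placeholder)
    t_acked = time.perf_counter_ns()
    return (t_acked - t_sent) / 1_000

# Simulated venue that takes ~2 ms to acknowledge
latency_us = measure_order_latency(lambda: None, lambda: time.sleep(0.002))
assert latency_us >= 2_000  # at least the simulated 2 ms
```

Feeding such per-order measurements back through a programmatic interface is what lets an algorithmic engine steer order flow toward the currently fastest route, as the service requirements above describe.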
The ability to identify the exact location of delays, whether in the customer edge network, the central processing hub, or the transaction application level, significantly determines the ability of service providers to meet their trading service-level agreements (SLAs). For buy-side and sell-side firms, as well as for market-data syndicators, the quick identification and removal of bottlenecks translates directly into enhanced trade opportunities and revenue.

Figure 5: Latency Management Architecture

Cisco Low-Latency Monitoring Tools

Traditional network monitoring tools operate with minutes or seconds of granularity. Next-generation trading platforms, especially those supporting algorithmic trading, require latencies of less than 5 ms and extremely low levels of packet loss. On a Gigabit LAN, a 100-ms microburst can cause 10,000 transactions to be lost or excessively delayed.

Cisco offers its customers a choice of tools to measure latency in a trading environment:

- Bandwidth Quality Manager (BQM) (OEM from Corvil)
- Cisco AON-based Financial Services Latency Monitoring Solution (FSMS)

Bandwidth Quality Manager

Bandwidth Quality Manager (BQM) 4.0 is a next-generation network application performance management product that enables customers to monitor and provision their network for controlled levels of latency and loss performance. While BQM is not exclusively targeted at trading networks, its microsecond-level visibility combined with intelligent bandwidth provisioning features make it ideal for these demanding environments.

Cisco BQM 4.0 implements a broad set of patented and patent-pending traffic measurement and network analysis technologies that give the user unprecedented visibility and understanding of how to optimize the network for maximum application performance. Cisco BQM is now supported on the product family of the Cisco Application Deployment Engine (ADE).
The Cisco ADE product family is the platform of choice for Cisco network management applications.

BQM Benefits

Cisco BQM micro-visibility is the ability to detect, measure, and analyze latency-, jitter-, and loss-inducing traffic events down to microsecond levels of granularity with per-packet resolution. This enables Cisco BQM to detect and determine the impact of traffic events on network latency, jitter, and loss. Critical for trading environments is that BQM can support one-way latency, loss, and jitter measurements for both TCP and UDP (multicast) traffic. This means it reports seamlessly for both trading traffic and market data feeds.

BQM allows the user to specify a comprehensive set of thresholds (against microburst activity, latency, loss, jitter, utilization, etc.) on all interfaces. BQM then operates a background rolling packet capture. Whenever a threshold violation or other potential performance degradation event occurs, it triggers Cisco BQM to store a packet capture to disk for later analysis. This allows the user to examine in full detail both the application traffic that was affected by the performance degradation ("the victims") and the traffic that caused the degradation ("the culprits"). This can significantly reduce the time spent diagnosing and resolving network performance issues.

BQM is also capable of providing detailed bandwidth and quality of service (QoS) policy provisioning recommendations, which the user can apply directly to achieve the desired network performance.
BQM Measurements Illustrated

To understand the difference between some of the more conventional measurement techniques and the visibility provided by BQM, we can look at some comparison graphs. In the first set of graphs (Figure 6 and Figure 7), we see the difference between the latency measured by BQM's Passive Network Quality Monitor (PNQM) and the latency measured by injecting ping packets every 1 second into the traffic stream.

In Figure 6, we see the latency reported by 1-second ICMP ping packets for real network traffic (divided by 2 to give an estimate of the one-way delay). It shows the delay comfortably below about 5 ms almost all of the time.

Figure 6: Latency Reported by 1-Second ICMP Ping Packets for Real Network Traffic

In Figure 7, we see the latency reported by PNQM for the same traffic at the same time. Here we see that by measuring the one-way latency of the actual application packets, we get a radically different picture. Here the latency is seen hovering around 20 ms, with occasional bursts far higher. The explanation is that because ping sends packets only every second, it completely misses most of the application traffic latency. In fact, ping results typically indicate only round-trip propagation delay rather than realistic application latency across the network.

Figure 7: Latency Reported by PNQM for Real Network Traffic

In the second example (Figure 8), we see the difference in reported link load or saturation levels between a 5-minute average view and a 5-ms microburst view (BQM can report on microbursts down to about 10-100 nanosecond accuracy). The green line shows the 5-minute average utilization to be low, maybe up to 5 Mbits/s. The dark blue plot shows the 5-ms microburst activity reaching between 75 Mbits/s and 100 Mbits/s, effectively the LAN speed. BQM shows this level of granularity for all applications, and it also gives clear provisioning rules to enable the user to control or neutralize these microbursts.
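The gap between the averaged and microburst views follows directly from sampling the same traffic over different window lengths. A toy computation with synthetic packet timestamps (the numbers are illustrative, not BQM output):

```python
def utilization_mbps(packets, window_s):
    """Peak utilization in Mbit/s over fixed windows of the given length.

    packets: list of (timestamp_seconds, size_bytes) tuples.
    """
    buckets = {}
    for ts, size in packets:
        key = int(ts / window_s)
        buckets[key] = buckets.get(key, 0) + size
    return max(buckets.values()) * 8 / 1e6 / window_s

# Synthetic traffic: one 5-ms burst of 50 x 1250-byte packets each minute
packets = []
for minute in range(5):
    t0 = minute * 60.0
    packets += [(t0 + i * 0.0001, 1250) for i in range(50)]

avg_5min = utilization_mbps(packets, 300.0)   # averaged view: looks nearly idle
burst_5ms = utilization_mbps(packets, 0.005)  # microburst view: near LAN speed
assert burst_5ms > 1000 * avg_5min
```

The same packets that average out to a fraction of a Mbit/s over 5 minutes saturate the link when viewed in 5-ms windows, which is exactly why second- or minute-granularity tools miss loss-inducing bursts.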
Figure 8: Difference in Reported Load Between a 5-Minute Average View and a 5-ms Microburst View

BQM Deployment in the Trading Network

Figure 9 shows a typical BQM deployment in a trading network.

Figure 9: Typical BQM Deployment in a Trading Network

BQM can then be used to answer these types of questions:

- Are any of my Gigabit LAN core links saturated for more than X milliseconds? Is this causing loss? Which links would most benefit from an upgrade to Etherchannel or 10 Gigabit speeds?
- What application traffic is causing the saturation of my 1 Gigabit links?
- Is any of the market data experiencing end-to-end loss?
- How much additional latency does the failover data center experience? Is this link sized correctly to deal with microbursts?
- Are my traders getting low-latency updates from the market data distribution layer? Are they seeing any delays greater than X milliseconds?

Being able to answer these questions simply and effectively saves time and money in running the trading network.

BQM is an essential tool for gaining visibility in market data and trading environments. It provides granular end-to-end latency measurements in complex infrastructures that experience high-volume data movement. Effectively detecting microbursts at sub-millisecond levels and receiving expert analysis on a particular event is invaluable to trading floor architects. Smart bandwidth provisioning recommendations, such as sizing and what-if analysis, provide greater agility to respond to volatile market conditions. As the explosion of algorithmic trading and increasing message rates continue, BQM, combined with its QoS tool, provides the capability of implementing QoS policies that can protect critical trading applications.
Cisco Financial Services Latency Monitoring Solution

Cisco and Trading Metrics have collaborated on latency monitoring solutions for FIX order flow and market data monitoring. Cisco AON technology is the foundation for a new class of network-embedded products and solutions that help merge intelligent networks with application infrastructure, based on either service-oriented or traditional architectures. Trading Metrics is a provider of analytics software for network infrastructure and application latency monitoring purposes (tradingmetrics).

The Cisco AON Financial Services Latency Monitoring Solution (FSMS) correlates two kinds of events at the point of observation:

- Network events correlated directly with coincident application message handling
- Trade order flow and matching market update events

Using time stamps asserted at the point of capture in the network, real-time analysis of these correlated data streams permits precise identification of bottlenecks across the infrastructure while a trade is being executed or market data is being distributed. By monitoring and measuring latency early in the cycle, financial companies can make better decisions about which network service, and which intermediary, market, or counterparty, to select for routing trade orders. Likewise, this knowledge allows more streamlined access to updated market data (stock quotes, economic news, etc.), which is an important basis for initiating, withdrawing from, or pursuing market opportunities.

The components of the solution are:

- AON hardware in three form factors: the AON Network Module for Cisco routers, an AON blade for the Cisco Catalyst 6500 series, and the AON 8340 Appliance
- Trading Metrics M&A 2.0 software, which provides the monitoring and alerting application, displays latency graphs on a dashboard, and issues alerts when slowdowns occur (tradingmetricsTMbrochure.pdf)
Figure 10: AON-Based FIX Latency Monitoring

Cisco IP SLA

Cisco IP SLA is an embedded network management tool in Cisco IOS which allows routers and switches to generate synthetic traffic streams that can be measured for latency, jitter, packet loss, and other criteria (ciscogoipsla).

Two key concepts are the source of the generated traffic and the target. Both of these run IP SLA "probes," which have the responsibility of timestamping the control traffic before it is sourced and returned by the target (for a round-trip measurement). Various traffic types can be sourced within IP SLA, aimed at different metrics and targeting different services and applications. The UDP jitter operation is used to measure one-way and round-trip delay and to report variations. Because the traffic is time-stamped on both the sending and target devices using the responder capability, the round-trip delay is characterized as the delta between the two timestamps.

A new feature introduced in IOS 12.3(14)T, IP SLA Sub-Millisecond Reporting, allows timestamps to be displayed with microsecond resolution, thus providing a level of granularity not previously available. This new feature has made IP SLA relevant to campus networks, where network latency is typically in the range of 300-800 microseconds, and where the ability to detect trends and spikes (brief trends) based on microsecond-granularity counters is a requirement for customers engaged in time-sensitive electronic trading environments.
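The delay and jitter arithmetic of a UDP jitter operation can be sketched as follows. This is a simplified model of the computation, not the IOS implementation: it assumes sender and receiver clocks are synchronized and ignores the loss accounting and clock-skew handling a real operation performs.

```python
def jitter_stats(send_ts_us, recv_ts_us):
    """One-way delays and inter-packet jitter (both in microseconds)
    from per-packet send/receive timestamps.

    Jitter here is the absolute variation between consecutive one-way
    delays, which is what the probe stream exposes.
    """
    delays = [r - s for s, r in zip(send_ts_us, recv_ts_us)]
    jitter = [abs(delays[i] - delays[i - 1]) for i in range(1, len(delays))]
    return delays, jitter

# 4 probe packets sent 1000 us apart; delivery delay varies 300-800 us,
# matching the campus latency range discussed above
send = [0, 1000, 2000, 3000]
recv = [300, 1800, 2500, 3450]
delays, jitter = jitter_stats(send, recv)
assert delays == [300, 800, 500, 450]
assert jitter == [500, 300, 50]
```

With sub-millisecond reporting, it is precisely these microsecond-scale delay variations that become visible rather than rounding to zero.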
As a result, IP SLA is now being considered by significant numbers of financial organizations, as they are all faced with requirements to:

- Report baseline latency to their users
- Trend baseline latency over time
- Respond quickly to traffic bursts that cause changes in the reported latency

Sub-millisecond reporting is necessary for these customers, since many campuses and backbones currently deliver under a second of latency across several switch hops. Electronic trading environments have generally worked to eliminate or minimize all areas of device and network latency to deliver rapid order fulfillment to the business. Reporting that network response times are just under one millisecond is no longer sufficient; the granularity of latency measurements reported across a network segment or backbone needs to approach 300-800 microseconds with a degree of resolution of 100 microseconds.

IP SLA recently added support for IP multicast test streams, which can measure market data latency.

A typical network topology is shown in Figure 11, with the IP SLA shadow routers, sources, and responders.

Figure 11: IP SLA Deployment

Computing Services

Computing services cover a wide range of technologies with the goal of eliminating memory and CPU bottlenecks created by the processing of network packets. Trading applications consume high volumes of market data, and the servers have to dedicate resources to processing network traffic instead of application processing.

- Transport processing: at high speeds, network packet processing can consume a significant amount of server CPU cycles and memory. An established rule of thumb states that 1 Gbps of network bandwidth requires 1 GHz of processor capacity (source: Intel white paper on I/O acceleration, inteltechnologyioacceleration306517.pdf).
- Intermediate buffer copying: in a conventional network stack implementation, data needs to be copied by the CPU between network buffers and application buffers. This overhead is worsened by the fact that memory speeds have not kept up with increases in CPU speeds. For example, processors like the Intel Xeon are approaching 4 GHz, while RAM chips hover around 400 MHz (for DDR 3200 memory) (source: Intel, inteltechnologyioacceleration306517.pdf).
- Context switching: every time an individual packet needs to be processed, the CPU performs a context switch from application context to network traffic context. This overhead could be reduced if the switch occurred only when the whole application buffer is complete.

Figure 12: Sources of Overhead in Data Center Servers

- TCP Offload Engine (TOE): offloads transport processing cycles to the NIC and moves TCP/IP protocol stack buffer copies from system memory to NIC memory.
- Remote Direct Memory Access (RDMA): enables a network adapter to transfer data directly from application to application without involving the operating system. It eliminates intermediate and application buffer copies (memory bandwidth consumption).
- Kernel bypass: direct user-level access to hardware, which dramatically reduces application context switches.

Figure 13: RDMA and Kernel Bypass

InfiniBand is a point-to-point (switched fabric) bidirectional serial communication link which implements RDMA, among other features. Cisco offers an InfiniBand switch, the Server Fabric Switch (SFS): ciscoapplicationpdfenusguestnetsolns500c643cdccont0900aecd804c35cb.pdf.

Figure 14: Typical SFS Deployment

Trading applications benefit from the reduction in latency and latency variability, as proved by a test performed with the Cisco SFS and Wombat Feed Handlers by STAC Research.

Application Virtualization Service

De-coupling applications from the underlying OS and server hardware enables them to run as network services.
One application can be run in parallel on multiple servers, or multiple applications can be run on the same server, as the best resource allocation dictates. This decoupling enables better load balancing and disaster recovery for business continuance strategies. The process of re-allocating computing resources to an application is dynamic. Using an application virtualization system like Data Synapse's GridServer, applications can migrate, using pre-configured policies, to under-utilized servers in a supply-matches-demand process (wwwworkworldsupp2005ndc1022105virtual.htmlpage2).

There are many business advantages for financial firms that adopt application virtualization:

- Faster time to market for new products and services
- Faster integration of firms after merger and acquisition activity
- Increased application availability
- Better workload distribution, which creates more "head room" for processing spikes in trading volume
- Operational efficiency and control
- Reduction in IT complexity

Today, application virtualization is not used in the trading front office. One use case is risk modeling, like Monte Carlo simulations. As the technology evolves, it is conceivable that some trading platforms will adopt it.

Data Virtualization Service

To effectively share resources across distributed enterprise applications, firms must be able to leverage data across multiple sources in real time while ensuring data integrity. With solutions from data virtualization software vendors such as Gemstone or Tangosol (now Oracle), financial firms can access heterogeneous sources of data as a single system image that enables connectivity between business processes and unrestrained application access to distributed caching.
The end result is that all users have instant access to these data resources across a distributed network (gridtoday030210101061.html). This is called a data grid and is the first step in the process of creating what Gartner calls Extreme Transaction Processing (XTP) (gartnerDisplayDocumentrefgsearchampid500947). Technologies such as data and application virtualization enable financial firms to perform real-time complex analytics, event-driven applications, and dynamic resource allocation. One example of data virtualization in action is a global order book application. An order book is the repository of active orders that is published by the exchange or other market makers. A global order book aggregates orders from around the world from markets that operate independently. The biggest challenge for the application is scalability over WAN connectivity because it has to maintain state. Today's data grids are localized in data centers connected by Metro Area Networks (MAN). This is mainly because the applications themselves have limits; they have been developed without the WAN in mind. Figure 15 GemStone GemFire Distributed Caching. Before data virtualization, applications used database clustering for failover and scalability. This solution is limited by the performance of the underlying database. Failover is slower because the data is committed to disk. With data grids, the data that is part of the active state is cached in memory, which drastically reduces failover time. Scaling the data grid means just adding more distributed resources, providing more deterministic performance compared to a database cluster. Multicast Service. Market data delivery is a perfect example of an application that needs to deliver the same data stream to hundreds and potentially thousands of end users. Market data services have been implemented with TCP or UDP broadcast as the network layer, but those implementations have limited scalability. 
Using TCP requires a separate socket and sliding window on the server for each recipient. UDP broadcast requires a separate copy of the stream for each destination subnet. Both of these methods exhaust the resources of the servers and the network. The server side must transmit and service each of the streams individually, which requires larger and larger server farms. On the network side, the required bandwidth for the application increases in a linear fashion. For example, to send a 1 Mbps stream to 1,000 recipients using TCP requires 1 Gbps of bandwidth. IP multicast is the only way to scale market data delivery. To deliver a 1 Mbps stream to 1,000 recipients, IP multicast would require 1 Mbps. The stream can be delivered by as few as two servers: one primary and one backup for redundancy. There are two main phases of market data delivery to the end user. In the first phase, the data stream must be brought from the exchange into the brokerage's network. Typically the feeds are terminated in a data center on the customer premises. The feeds are then processed by a feed handler, which may normalize the data stream into a common format and then republish into the application messaging servers in the data center. The second phase involves injecting the data stream into the application messaging bus which feeds the core infrastructure of the trading applications. The large brokerage houses have thousands of applications that use the market data streams for various purposes, such as live trades, long term trending, arbitrage, etc. Many of these applications listen to the feeds and then republish their own analytical and derivative information. For example, a brokerage may compare the prices of CSCO to the option prices of CSCO on another exchange and then publish ratings which a different application may monitor to determine how much they are out of synchronization. 
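The bandwidth arithmetic above can be sketched directly: unicast cost grows linearly with the number of recipients, while a multicast source's cost stays flat because the network handles fan-out.

```python
def unicast_bandwidth_mbps(stream_mbps, recipients):
    # TCP/unicast: the server sends a full copy of the stream per recipient.
    return stream_mbps * recipients

def multicast_bandwidth_mbps(stream_mbps, recipients):
    # IP multicast: the network replicates one copy; source bandwidth is flat.
    return stream_mbps

print(unicast_bandwidth_mbps(1.0, 1000))    # 1000.0 Mbps, i.e., 1 Gbps
print(multicast_bandwidth_mbps(1.0, 1000))  # 1.0 Mbps
```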
Figure 16 Market Data Distribution Players. The delivery of these data streams is typically over a reliable multicast transport protocol, traditionally Tibco Rendezvous. Tibco RV operates in a publish and subscribe environment. Each financial instrument is given a subject name, such as CSCO.last. Each application server can request the individual instruments of interest by their subject name and receive just that subset of the information. This is called subject-based forwarding or filtering. Subject-based filtering is patented by Tibco. A distinction should be made between the first and second phases of market data delivery. The delivery of market data from the exchange to the brokerage is mostly a one-to-many application. The only exception to the unidirectional nature of market data may be retransmission requests, which are usually sent using unicast. The trading applications, however, are definitely many-to-many applications and may interact with the exchanges to place orders. Figure 17 Market Data Architecture. Design Issues. Number of Groups/Channels to Use. Many application developers consider using thousands of multicast groups to give them the ability to divide up products or instruments into small buckets. Normally these applications send many small messages as part of their information bus. Usually several messages are sent in each packet and are received by many users. Sending fewer messages in each packet increases the overhead necessary for each message. In the extreme case, sending only one message in each packet quickly reaches the point of diminishing returns: there is more overhead sent than actual data. Application developers must find a reasonable compromise between the number of groups and breaking up their products into logical buckets. Consider, for example, the Nasdaq Quotation Dissemination Service (NQDS). 
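Subject-based filtering can be sketched as a minimal publish/subscribe bus. This is an illustrative toy in the spirit of Tibco RV subject names like CSCO.last, not Tibco's API; all class and method names are invented.

```python
from collections import defaultdict

class SubjectBus:
    """Toy subject-based pub/sub: listeners receive only subjects they asked for."""

    def __init__(self):
        self.subscribers = defaultdict(list)  # subject name -> list of callbacks

    def subscribe(self, subject, callback):
        self.subscribers[subject].append(callback)

    def publish(self, subject, message):
        # Only listeners registered for exactly this subject get the message.
        for cb in self.subscribers[subject]:
            cb(subject, message)

bus = SubjectBus()
received = []
bus.subscribe("CSCO.last", lambda s, m: received.append((s, m)))
bus.publish("CSCO.last", 19.25)   # delivered to our subscriber
bus.publish("INTC.last", 21.10)   # filtered out: nobody subscribed
```

A real implementation would also support wildcard subjects (e.g. CSCO.*) and push the filtering toward the network, but the principle is the same: the subscription set, not the publisher, decides who sees what.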
The instruments are broken up alphabetically across the service's channels. This approach allows for straightforward network/application management, but does not necessarily allow for optimized bandwidth utilization for most users. A user of NQDS who is interested in technology stocks, and would like to subscribe to just CSCO and INTL, would have to pull down all the data for the first two groups of NQDS. Understanding the way users pull down the data, and then organizing it into appropriate logical groups, optimizes the bandwidth for each user. In many market data applications, optimizing the data organization would be of limited value. Typically customers bring all data into a few machines and filter the instruments there. Using more groups just adds overhead for the stack and does not help the customers conserve bandwidth. Another approach might be to keep the number of groups down to a minimum and use UDP port numbers to further differentiate if necessary. The other extreme would be to use just one multicast group for the entire application and have the end user filter the data. In some situations this may be sufficient. Intermittent Sources. A common issue with market data applications is servers that send data to a multicast group and then go silent for more than 3.5 minutes. These intermittent sources may cause thrashing of state on the network and can introduce packet loss during the window of time when soft state and then hardware state are being created. PIM-Bidir or PIM-SSM. The first and best solution for intermittent sources is to use PIM-Bidir for many-to-many applications and PIM-SSM for one-to-many applications. Neither of these optimizations of the PIM protocol has any data-driven events in creating forwarding state. That means that as long as the receivers are subscribed to the streams, the network keeps the forwarding state created in the hardware switching path. Intermittent sources are not an issue with PIM-Bidir and PIM-SSM. 
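One simple way to strike the compromise between "one group per product" and "one group for everything" is to hash instrument symbols into a small, fixed set of groups. The sketch below is hypothetical (the address range and group count are invented for illustration), but it shows the mechanic: every subscriber interested in a symbol deterministically derives the same group to join.

```python
import zlib

BASE_GROUP = "239.1.1."   # illustrative administratively-scoped multicast range
NUM_GROUPS = 8            # keep the group count modest to limit protocol state

def group_for(symbol: str) -> str:
    """Deterministically map an instrument symbol to one of NUM_GROUPS groups."""
    bucket = zlib.crc32(symbol.encode()) % NUM_GROUPS
    return BASE_GROUP + str(bucket)

# Every subscriber computes the same group for the same symbol.
print(group_for("CSCO"))
```

Hashing spreads traffic evenly but scatters related instruments; the alphabetical or sector-based buckets discussed above trade even load for more predictable subscriptions.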
Null Packets. In PIM-SM environments, a common method to make sure forwarding state is created is to send a burst of null packets to the multicast group before the actual data stream. The application must efficiently ignore these null data packets to ensure it does not affect performance. The sources need only send the burst of packets if they have been silent for more than 3 minutes; a good practice is to send the burst if the source has been silent for more than a minute. Many financials send out an initial burst of traffic in the morning, and then all well-behaved sources have no problems. Periodic Keepalives or Heartbeats. An alternative approach for PIM-SM environments is for sources to send periodic heartbeat messages to the multicast groups. This is similar to the null-packet approach, but the packets can be sent on a regular timer so that the forwarding state never expires. S,G Expiry Timer. Finally, Cisco has made a modification to the operation of the S,G expiry timer in IOS. There is now a CLI knob to allow the state for an S,G to stay alive for hours without any traffic being sent; the (S,G) expiry timer is configurable. This approach should be considered a workaround until PIM-Bidir or PIM-SSM is deployed or the application is fixed. RTCP Feedback. A common issue with real-time voice and video applications that use RTP is RTCP feedback traffic. Unnecessary use of the feedback option can create excessive multicast state in the network. If the RTCP traffic is not required by the application, it should be avoided. Fast Producers and Slow Consumers. Today many servers providing market data are attached at Gigabit speeds, while the receivers are attached at lower speeds, usually 100 Mbps. This creates the potential for receivers to drop packets and request retransmissions, which creates more traffic that the slowest consumers cannot handle, continuing the vicious circle. 
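On IOS releases that support it, the (S,G) expiry knob described above is configured roughly as follows. This is a sketch from memory; the exact command name and value range should be verified against the command reference for your IOS release.

```
! Keep (S,G) state alive without traffic (value in seconds; 14400 = 4 hours)
ip pim sparse sg-expiry-timer 14400
```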
The solution needs to be some type of access control in the application that limits the amount of data one host can request. QoS and other network functions can mitigate the problem, but ultimately the subscriptions need to be managed in the application. Tibco Heartbeats. TibcoRV has had the ability to use IP multicast for the heartbeat between the TICs for many years. However, some brokerage houses are still using very old versions of TibcoRV that rely on UDP broadcast for resiliency. This limitation is often cited as a reason to maintain a Layer 2 infrastructure between TICs located in different data centers. These older versions of TibcoRV should be phased out in favor of the IP multicast supported versions. Multicast Forwarding Options. PIM Sparse Mode. The standard IP multicast forwarding protocol used today for market data delivery is PIM Sparse Mode. It is supported on all Cisco routers and switches and is well understood. PIM-SM can be used in all the network components from the exchange, FSP, and brokerage. There are, however, some long-standing issues and unnecessary complexity associated with a PIM-SM deployment that could be avoided by using PIM-Bidir and PIM-SSM. These are covered in the next sections. The main components of the PIM-SM implementation are: PIM Sparse Mode v2; Shared Tree (spt-threshold infinity), a design option in the brokerage or in the exchange. 
The LMAX Architecture. Over the last few years we keep hearing that the free lunch is over [1] - we can't expect increases in individual CPU speed. So to write fast code we need to explicitly use multiple processors with concurrent software. This is not good news - writing concurrent code is very hard. Locks and semaphores are hard to reason about and hard to test - meaning we are spending more time worrying about satisfying the computer than we are solving the domain problem. Various concurrency models, such as Actors and Software Transactional Memory, aim to make this easier - but there is still a burden that introduces bugs and complexity. So I was fascinated to hear about a talk at QCon London in March last year from LMAX. LMAX is a new retail financial trading platform. Its business innovation is that it is a retail platform - allowing anyone to trade in a range of financial derivative products [2]. A trading platform like this needs very low latency - trades have to be processed quickly because the market is moving rapidly. A retail platform adds complexity because it has to do this for lots of people. 
So the result is more users, with lots of trades, all of which need to be processed quickly [3]. Given the shift to multi-core thinking, this kind of demanding performance would naturally suggest an explicitly concurrent programming model - and indeed this was their starting point. But the thing that got people's attention at QCon was that this wasn't where they ended up. In fact they ended up by doing all the business logic for their platform: all trades, from all customers, in all markets - on a single thread. A thread that will process 6 million orders per second using commodity hardware [4]. Processing lots of transactions with low latency and none of the complexities of concurrent code - how can I resist digging into that? Fortunately another difference LMAX has to other financial companies is that they are quite happy to talk about their technological decisions. So now LMAX has been in production for a while, it's time to explore their fascinating design. Overall Structure. Figure 1: LMAX's architecture in three blobs. At a top level, the architecture has three parts: a business logic processor [5], an input disruptor, and output disruptors. As its name implies, the business logic processor handles all the business logic in the application. As I indicated above, it does this as a single-threaded Java program which reacts to method calls and produces output events. Consequently it's a simple Java program that doesn't require any platform frameworks to run other than the JVM itself, which allows it to be easily run in test environments. Although the Business Logic Processor can run in a simple environment for testing, there is rather more involved choreography to get it to run in a production setting. Input messages need to be taken off a network gateway and unmarshaled, replicated, and journaled. Output messages need to be marshaled for the network. These tasks are handled by the input and output disruptors. 
Unlike the Business Logic Processor, these are concurrent components, since they involve IO operations which are both slow and independent. They were designed and built especially for LMAX, but they (like the overall architecture) are applicable elsewhere. Business Logic Processor. Keeping it all in memory. The Business Logic Processor takes input messages sequentially (in the form of a method invocation), runs business logic on them, and emits output events. It operates entirely in-memory; there is no database or other persistent store. Keeping all data in-memory has two important benefits. Firstly it's fast - there's no database to provide slow IO to access, nor is there any transactional behavior to execute, since all the processing is done sequentially. The second advantage is that it simplifies programming - there's no object/relational mapping to do. All the code can be written using Java's object model without having to make any compromises for the mapping to a database. Using an in-memory structure has an important consequence - what happens if everything crashes? Even the most resilient systems are vulnerable to someone pulling the power. The heart of dealing with this is Event Sourcing - which means that the current state of the Business Logic Processor is entirely derivable by processing the input events. As long as the input event stream is kept in a durable store (which is one of the jobs of the input disruptor), you can always recreate the current state of the business logic engine by replaying the events. A good way to understand this is to think of a version control system. Version control systems are a sequence of commits; at any time you can build a working copy by applying those commits. VCSs are more complicated than the Business Logic Processor because they must support branching, while the Business Logic Processor is a simple sequence. So, in theory, you can always rebuild the state of the Business Logic Processor by reprocessing all the events. 
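The core Event Sourcing idea can be shown in a few lines: state is nothing but a fold over the durable event stream, so replaying the journal reconstructs it exactly. This is a minimal illustrative sketch, not LMAX code; the domain (an account with deposits and withdrawals) is invented.

```python
class Account:
    """Toy in-memory state that is entirely derived from applied events."""

    def __init__(self):
        self.balance = 0

    def apply(self, event):
        kind, amount = event
        if kind == "deposit":
            self.balance += amount
        elif kind == "withdraw":
            self.balance -= amount

def replay(events):
    # Rebuild current state purely by reprocessing the journaled events.
    state = Account()
    for event in events:
        state.apply(event)
    return state

journal = [("deposit", 100), ("withdraw", 30), ("deposit", 5)]
print(replay(journal).balance)  # 75
# A snapshot is just an optimization: restore the snapshot, then replay
# only the events journaled after it.
```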
In practice, however, that would take too long should you need to spin one up. So, just as with version control systems, LMAX can make snapshots of the Business Logic Processor state and restore from the snapshots. They take a snapshot every night during periods of low activity. Restarting the Business Logic Processor is fast: a full restart - including restarting the JVM, loading a recent snapshot, and replaying a day's worth of journals - takes less than a minute. Snapshots make starting up a new Business Logic Processor faster, but not quickly enough should a Business Logic Processor crash at 2pm. As a result LMAX keeps multiple Business Logic Processors running all the time [6]. Each input event is processed by multiple processors, but all but one processor has its output ignored. Should the live processor fail, the system switches to another one. This ability to handle fail-over is another benefit of using Event Sourcing. By event sourcing into replicas they can switch between processors in a matter of microseconds. As well as taking snapshots every night, they also restart the Business Logic Processors every night. The replication allows them to do this with no downtime, so they continue to process trades 24/7. For more background on Event Sourcing, see the draft pattern on my site from a few years ago. The article is more focused on handling temporal relationships than on the benefits that LMAX uses, but it does explain the core idea. Event Sourcing is valuable because it allows the processor to run entirely in-memory, but it has another considerable advantage for diagnostics. If some unexpected behavior occurs, the team copies the sequence of events to their development environment and replays them there. This allows them to examine what happened much more easily than is possible in most environments. This diagnostic capability extends to business diagnostics. 
There are some business tasks, such as in risk management, that require significant computation that isn't needed for processing orders. An example is getting a list of the top 20 customers by risk profile based on their current trading positions. The team handles this by spinning up a replicate domain model and carrying out the computation there, where it won't interfere with the core order processing. These analysis domain models can have variant data models, keep different data sets in memory, and run on different machines. Tuning performance. So far I've explained that the key to the speed of the Business Logic Processor is doing everything sequentially, in-memory. Just doing this (and nothing really stupid) allows developers to write code that can process 10K TPS [7]. They then found that concentrating on the simple elements of good code could bring this up into the 100K TPS range. This just needs well-factored code and small methods - essentially this allows Hotspot to do a better job of optimizing and for CPUs to be more efficient in caching the code as it's running. It took a bit more cleverness to go up another order of magnitude. There are several things that the LMAX team found helpful to get there. One was to write custom implementations of the Java collections that were designed to be cache-friendly and careful with garbage [8]. An example of this is using primitive Java longs as hashmap keys with a specially written array-backed Map implementation (LongToObjectHashMap). In general they've found that choice of data structures often makes a big difference; most programmers just grab whatever List they used last time rather than thinking about which implementation is the right one for this context [9]. Another technique to reach that top level of performance is putting attention into performance testing. I've long noticed that people talk a lot about techniques to improve performance, but the one thing that really makes a difference is to test it. 
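The idea behind an array-backed, primitive-keyed map can be sketched as follows. This is an illustrative open-addressing map in the spirit of LMAX's LongToObjectHashMap, not its actual implementation (the class name and details here are invented): keys and values live in flat arrays, which keeps lookups cache-friendly and avoids per-entry node objects that would create garbage.

```python
class IntToObjectMap:
    """Toy open-addressing map keyed by integers, backed by flat arrays."""

    EMPTY = object()  # sentinel marking an unused slot

    def __init__(self, capacity=16):
        self.capacity = capacity                  # must be a power of two
        self.keys = [0] * capacity
        self.values = [self.EMPTY] * capacity

    def _slot(self, key):
        i = key & (self.capacity - 1)             # cheap power-of-two modulus
        while self.values[i] is not self.EMPTY and self.keys[i] != key:
            i = (i + 1) & (self.capacity - 1)     # linear probing
        return i

    def put(self, key, value):
        # Sketch only: a real implementation resizes before the table fills.
        i = self._slot(key)
        self.keys[i], self.values[i] = key, value

    def get(self, key, default=None):
        i = self._slot(key)
        return default if self.values[i] is self.EMPTY else self.values[i]

m = IntToObjectMap()
m.put(42, "order")
print(m.get(42))   # order
print(m.get(7))    # None
```

In Java the payoff is larger than this Python sketch suggests: primitive long[] keys avoid boxing every key into a Long object, so both garbage and pointer-chasing disappear.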
Even good programmers are very good at constructing performance arguments that end up being wrong, so the best programmers prefer profilers and test cases to speculation [10]. The LMAX team has also found that writing tests first is a very effective discipline for performance tests. Programming Model. This style of processing does introduce some constraints into the way you write and organize the business logic. The first of these is that you have to tease out any interaction with external services. An external service call is going to be slow, and with a single thread it will halt the entire order processing machine. As a result you can't make calls to external services within the business logic. Instead you need to finish that interaction with an output event, and wait for another input event to pick it back up again. I'll use a simple non-LMAX example to illustrate. Imagine you are making an order for jelly beans by credit card. A simple retailing system would take your order information, use a credit card validation service to check your credit card number, and then confirm your order - all within a single operation. The thread processing your order would block while waiting for the credit card to be checked, but that block wouldn't be very long for the user, and the server can always run another thread on the processor while it's waiting. In the LMAX architecture, you would split this operation into two. The first operation would capture the order information and finish by outputting an event (credit card validation requested) to the credit card company. The Business Logic Processor would then carry on processing events for other customers until it received a credit-card-validated event in its input event stream. On processing that event it would carry out the confirmation tasks for that order. Working in this kind of event-driven, asynchronous style is somewhat unusual - although using asynchrony to improve the responsiveness of an application is a familiar technique. 
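The two-step split described above can be sketched as a small state machine. This is a hypothetical non-LMAX illustration (all names invented): the processor emits a request event and moves on, and a later input event completes the order.

```python
class OrderProcessor:
    """Single-threaded processor that never blocks on external services."""

    def __init__(self):
        self.pending = {}   # order_id -> order details awaiting validation
        self.output = []    # output events for other components to deliver

    def on_order_placed(self, order_id, details):
        # Step 1: record the order and emit a request event instead of
        # calling the credit card service in-line.
        self.pending[order_id] = details
        self.output.append(("credit_card_validation_requested", order_id))

    def on_card_validated(self, order_id, ok):
        # Step 2: a later input event picks the interaction back up.
        details = self.pending.pop(order_id)
        kind = "order_confirmed" if ok else "order_rejected"
        self.output.append((kind, order_id, details))

p = OrderProcessor()
p.on_order_placed(1, "jelly beans")   # processor immediately moves on
p.on_card_validated(1, True)          # arrives later as a new input event
print(p.output[-1])                   # ('order_confirmed', 1, 'jelly beans')
```

Between the two calls the processor is free to handle events for other customers, which is exactly why the external call's latency never stalls the machine.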
It also helps the business process be more resilient, as you have to be more explicit in thinking about the different things that can happen with the remote application. A second feature of the programming model lies in error handling. The traditional model of sessions and database transactions provides a helpful error-handling capability. Should anything go wrong, it's easy to throw away everything that happened so far in the interaction. Session data is transient, and can be discarded, at the cost of some irritation to the user if in the middle of something complicated. If an error occurs on the database side you can roll back the transaction. LMAX's in-memory structures are persistent across input events, so if there is an error it's important not to leave that memory in an inconsistent state. However there's no automated rollback facility. As a consequence the LMAX team puts a lot of attention into ensuring the input events are fully valid before doing any mutation of the in-memory persistent state. They have found that testing is a key tool in flushing out these kinds of problems before going into production. Input and Output Disruptors. Although the business logic occurs in a single thread, there are a number of tasks to be done before we can invoke a business object method. The original input for processing comes off the wire in the form of a message; this message needs to be unmarshaled into a form convenient for the Business Logic Processor to use. Event Sourcing relies on keeping a durable journal of all the input events, so each input message needs to be journaled onto a durable store. Finally the architecture relies on a cluster of Business Logic Processors, so we have to replicate the input messages across this cluster. Similarly on the output side, the output events need to be marshaled for transmission over the network. 
Figure 2: The activities done by the input disruptor (using UML activity diagram notation). The replicator and journaler involve IO and are therefore relatively slow. After all, the central idea of the Business Logic Processor is that it avoids doing any IO. Also these three tasks are relatively independent; all of them need to be done before the Business Logic Processor works on a message, but they can be done in any order. So unlike with the Business Logic Processor, where each trade changes the market for subsequent trades, there is a natural fit for concurrency. To handle this concurrency the LMAX team developed a special concurrency component, which they call a Disruptor [11]. The LMAX team have released the source code for the Disruptor with an open source licence. At a crude level you can think of a Disruptor as a multicast graph of queues where producers put objects on it that are sent to all the consumers for parallel consumption through separate downstream queues. When you look inside you see that this network of queues is really a single data structure - a ring buffer. Each producer and consumer has a sequence counter to indicate which slot in the buffer it's currently working on. Each producer/consumer writes its own sequence counter but can read the others' sequence counters. This way the producer can read the consumers' counters to ensure the slot it wants to write in is available without any locks on the counters. Similarly a consumer can ensure it only processes messages once another consumer is done with them by watching the counters. Figure 3: The input disruptor coordinates one producer and four consumers. Output disruptors are similar but they only have two sequential consumers for marshaling and output [12]. Output events are organized into several topics, so that messages can be sent to only the receivers who are interested in them. Each topic has its own disruptor. 
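The sequence-counter coordination can be sketched with a minimal single-producer, single-consumer ring buffer. This is an illustrative toy, far simpler than the real Disruptor (no memory barriers, no batching, one consumer), but it shows the key move: each side writes only its own counter and merely reads the other's, so no slot access needs a lock.

```python
class RingBuffer:
    """Toy single-producer / single-consumer ring with sequence counters."""

    def __init__(self, size=8):
        self.size = size              # must be a power of two
        self.slots = [None] * size
        self.producer_seq = 0         # next slot the producer will write
        self.consumer_seq = 0         # next slot the consumer will read

    def publish(self, item):
        # The producer only READS the consumer's counter to check capacity.
        if self.producer_seq - self.consumer_seq >= self.size:
            raise RuntimeError("buffer full: producer would overwrite unread slots")
        self.slots[self.producer_seq & (self.size - 1)] = item
        self.producer_seq += 1        # publish by advancing our own counter

    def consume(self):
        # The consumer only READS the producer's counter to check availability.
        if self.consumer_seq == self.producer_seq:
            return None               # nothing published yet
        item = self.slots[self.consumer_seq & (self.size - 1)]
        self.consumer_seq += 1
        return item

rb = RingBuffer()
rb.publish("trade-1")
rb.publish("trade-2")
print(rb.consume())   # trade-1
```

With multiple consumers, the producer would wait on the slowest consumer's counter, and dependent consumers (such as the Business Logic Processor) would watch their upstream consumers' counters in the same lock-free way.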
The disruptors I've described are used in a style with one producer and multiple consumers, but this isn't a limitation of the design of the disruptor. The disruptor can work with multiple producers too; in this case it still doesn't need locks [13]. A benefit of the disruptor design is that it makes it easier for consumers to catch up quickly if they run into a problem and fall behind. If the unmarshaler has a problem when processing on slot 15 and returns when the receiver is on slot 31, it can read data from slots 16-30 in one batch to catch up. This batch read of the data from the disruptor makes it easier for lagging consumers to catch up quickly, thus reducing overall latency. I've described things here with one each of the journaler, replicator, and unmarshaler - this indeed is what LMAX does. But the design would allow multiple of these components to run. If you ran two journalers then one would take the even slots and the other journaler would take the odd slots. This allows further concurrency of these IO operations should this become necessary. The ring buffers are large: 20 million slots for the input buffer and 4 million slots for each of the output buffers. The sequence counters are 64-bit long integers that increase monotonically even as the ring slots wrap [14]. The buffer is set to a size that's a power of two so the compiler can do an efficient modulus operation to map from the sequence counter number to the slot number. Like the rest of the system, the disruptors are bounced overnight. This bounce is mainly done to wipe memory so that there is less chance of an expensive garbage collection event during trading. (I also think it's a good habit to regularly restart, so that you rehearse how to do it for emergencies.) The journaler's job is to store all the events in a durable form, so that they can be replayed should anything go wrong. LMAX does not use a database for this, just the file system. They stream the events onto the disk. 
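The power-of-two trick mentioned above is worth making concrete: when the buffer size is a power of two, mapping a monotonically increasing sequence number to a slot is a single bitwise AND rather than a division.

```python
SIZE = 1 << 22   # e.g. 4M slots, a power of two (matching the output buffers)

def slot(sequence: int) -> int:
    # For power-of-two sizes, masking with SIZE - 1 equals sequence % SIZE,
    # but compiles down to one AND instruction instead of a division.
    return sequence & (SIZE - 1)

print(slot(5))                 # 5: low sequence numbers map directly
print(slot(2**32 + 5) == (2**32 + 5) % SIZE)  # True: mask equals modulus
```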
In modern terms, mechanical disks are horribly slow for random access, but very fast for streaming - hence the tag-line "disk is the new tape" [15]. Earlier on I mentioned that LMAX runs multiple copies of its system in a cluster to support rapid failover. The replicator keeps these nodes in sync. All communication in LMAX uses IP multicasting, so clients don't need to know which IP address is the master node. Only the master node listens directly to input events and runs a replicator. The replicator broadcasts the input events to the slave nodes. Should the master node go down, its lack of heartbeat will be noticed, another node becomes master, starts processing input events, and starts its replicator. Each node has its own input disruptor and thus has its own journal and does its own unmarshaling. Even with IP multicasting, replication is still needed because IP messages can arrive in a different order on different nodes. The master node provides a deterministic sequence for the rest of the processing. The unmarshaler turns the event data from the wire into a Java object that can be used to invoke behavior on the Business Logic Processor. Therefore, unlike the other consumers, it needs to modify the data in the ring buffer so it can store this unmarshaled object. The rule here is that consumers are permitted to write to the ring buffer, but each writable field can only have one parallel consumer that's allowed to write to it. This preserves the principle of only having a single writer [16]. Figure 4: The LMAX architecture with the disruptors expanded. The disruptor is a general purpose component that can be used outside of the LMAX system. Usually financial companies are very secretive about their systems, keeping quiet even about items that aren't germane to their business. Not just has LMAX been open about its overall architecture, they have open-sourced the disruptor code - an act that makes me very happy. 
Not just will this allow other organizations to make use of the disruptor, it will also allow for more testing of its concurrency properties. Queues and their lack of mechanical sympathy. The LMAX architecture caught people's attention because it's a very different way of approaching a high performance system to what most people are thinking about. So far I've talked about how it works, but haven't delved too much into why it was developed this way. This tale is interesting in itself, because this architecture didn't just appear. It took a long time of trying more conventional alternatives, and realizing where they were flawed, before the team settled on this one. Most business systems these days have a core architecture that relies on multiple active sessions coordinated through a transactional database. The LMAX team were familiar with this approach, and confident that it wouldn't work for LMAX. This assessment was founded in the experiences of Betfair - the parent company who set up LMAX. Betfair is a betting site that allows people to bet on sporting events. It handles very high volumes of traffic with a lot of contention - sports bets tend to burst around particular events. To make this work they have one of the hottest database installations around and have had to do many unnatural acts in order to make it work. Based on this experience they knew how difficult it was to maintain Betfair's performance and were sure that this kind of architecture would not work for the very low latency that a trading site would require. As a result they had to find a different approach. Their initial approach was to follow what so many are saying these days - that to get high performance you need to use explicit concurrency. For this scenario, this means allowing orders to be processed by multiple threads in parallel. However, as is often the case with concurrency, the difficulty comes because these threads have to communicate with each other. 
Processing an order changes market conditions and these conditions need to be communicated. The approach they explored early on was the Actor model and its cousin SEDA. The Actor model relies on independent, active objects with their own thread that communicate with each other via queues. Many people find this kind of concurrency model much easier to deal with than trying to do something based on locking primitives.

The team built a prototype exchange using the actor model and did performance tests on it. What they found was that the processors spent more time managing queues than doing the real logic of the application. Queue access was a bottleneck.

When pushing performance like this, it starts to become important to take account of the way modern hardware is constructed. The phrase Martin Thompson likes to use is "mechanical sympathy". The term comes from race car driving and it reflects the driver having an innate feel for the car, so they are able to feel how to get the best out of it. Many programmers, and I confess I fall into this camp, don't have much mechanical sympathy for how programming interacts with hardware. What's worse is that many programmers think they have mechanical sympathy, but it's built on notions of how hardware used to work that are now many years out of date.

One of the dominant factors with modern CPUs that affects latency is how the CPU interacts with memory. These days going to main memory is a very slow operation in CPU terms. CPUs have multiple levels of cache, each of which is significantly faster. So to increase speed you want to get your code and data into those caches.

At one level, the actor model helps here. You can think of an actor as its own object that clusters code and data, which is a natural unit for caching. But actors need to communicate, which they do through queues - and the LMAX team observed that it's the queues that interfere with caching.
The explanation runs like this: in order to put some data on a queue, you need to write to that queue. Similarly, to take data off the queue, you need to write to the queue to perform the removal. This is write contention - more than one client may need to write to the same data structure. To deal with the write contention a queue often uses locks. But if a lock is used, that can cause a context switch to the kernel. When this happens the processor involved is likely to lose the data in its caches.

The conclusion they came to was that to get the best caching behavior, you need a design that has only one core writing to any memory location. Multiple readers are fine; processors often use special high-speed links between their caches. But queues fail the one-writer principle.

This analysis led the LMAX team to a couple of conclusions. Firstly it led to the design of the disruptor, which determinedly follows the single-writer constraint. Secondly it led to the idea of exploring the single-threaded business logic approach, asking the question of how fast a single thread can go if it's freed of concurrency management.

The essence of working on a single thread is to ensure that you have one thread running on one core, the caches warm up, and as much memory access as possible goes to the caches rather than to main memory. This means that both the code and the working set of data need to be accessed as consistently as possible. Also, keeping small objects with code and data together allows them to be swapped between the caches as a unit, simplifying the cache management and again improving performance.

An essential part of the path to the LMAX architecture was the use of performance testing. The consideration and abandonment of an actor-based approach came from building and performance testing a prototype. Similarly, many of the steps in improving the performance of the various components were enabled by performance tests.
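The single-writer conclusion can be made concrete with a small sketch: a cursor owned by exactly one writer can be advanced with a plain increment, while a cursor shared by many writers needs an atomic read-modify-write, which is exactly where the contention bites. This is illustrative Java, not the disruptor's actual sequence implementation.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch contrasting the two write disciplines (illustrative only).
final class SingleWriterCursor {
    private volatile long cursor = -1;
    // Only one thread ever calls publish, so a plain increment is safe
    // even though ++ on a volatile is not atomic in general.
    long publish() { return ++cursor; }
    long current() { return cursor; }
}

final class MultiWriterCursor {
    private final AtomicLong cursor = new AtomicLong(-1);
    // Any thread may call publish, so every advance pays for an
    // atomic read-modify-write on a contended cache line.
    long publish() { return cursor.incrementAndGet(); }
    long current() { return cursor.get(); }
}
```

The single-writer version keeps the cache line in one core's cache; the multi-writer version bounces it between cores, which is the cost the disruptor design avoids.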
Mechanical sympathy is very valuable - it helps to form hypotheses about what improvements you can make, and guides you to forward steps rather than backward ones - but in the end it's the testing that gives you the convincing evidence.

Performance testing in this style, however, is not a well-understood topic. The LMAX team regularly stresses that coming up with meaningful performance tests is often harder than developing the production code. Again, mechanical sympathy is important to developing the right tests. Testing a low-level concurrency component is meaningless unless you take into account the caching behavior of the CPU. One particular lesson is the importance of writing tests against null components to ensure the performance test is fast enough to really measure what real components are doing. Writing fast test code is no easier than writing fast production code, and it's too easy to get false results because the test isn't as fast as the component it's trying to measure.

Should you use this architecture?

At first glance, this architecture appears to be for a very small niche. After all, the driver that led to it was to be able to run lots of complex transactions with very low latency - most applications don't need to run at 6 million TPS. But the thing that fascinates me about this application is that they have ended up with a design which removes much of the programming complexity that plagues many software projects.

The traditional model of concurrent sessions surrounding a transactional database isn't free of hassles. There's usually a non-trivial effort that goes into the relationship with the database. Object-relational mapping tools can ease much of the pain of dealing with a database, but they don't deal with it all. Most performance tuning of enterprise applications involves futzing around with SQL. These days, you can get more main memory into your servers than us old guys could get as disk space.
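The null-component lesson mentioned above can be sketched as a tiny harness: time a do-nothing handler first to establish the floor of what the test loop itself costs. The `Handler`, `NullHandler`, and `Harness` names are hypothetical, not from LMAX's test suite, and a serious benchmark would also need warm-up runs and repeated samples.

```java
// Sketch of the null-component idea: measure the harness against a
// do-nothing component first, so you know the test itself is fast
// enough to measure the real component.
interface Handler {
    void onEvent(long value);
}

final class NullHandler implements Handler {
    public void onEvent(long value) { /* deliberately does nothing */ }
}

final class Harness {
    // Returns nanoseconds spent pushing 'count' events through the handler.
    static long run(Handler handler, int count) {
        long start = System.nanoTime();
        for (long i = 0; i < count; i++) handler.onEvent(i);
        return System.nanoTime() - start;
    }
}
```

If the null run is not far cheaper than the run against the real component, the harness is what you are measuring, and the numbers for the component are meaningless.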
More and more applications are quite capable of putting all their working set in main memory - thus eliminating a source of both complexity and sluggishness. Event Sourcing provides a way to solve the durability problem for an in-memory system; running everything in a single thread solves the concurrency issue. The LMAX experience suggests that as long as you need less than a few million TPS, you'll have enough performance headroom.

There is a considerable overlap here with the growing interest in CQRS. An event-sourced, in-memory processor is a natural choice for the command side of a CQRS system. (Although the LMAX team does not currently use CQRS.)

So what indicates you shouldn't go down this path? This is always a tricky question for little-known techniques like this, since the profession needs more time to explore its boundaries. A starting point, however, is to think of the characteristics that encourage the architecture.

One characteristic is that this is a connected domain, where processing one transaction always has the potential to change how following ones are processed. With transactions that are more independent of each other, there's less need to coordinate, so using separate processors running in parallel becomes more attractive. LMAX concentrates on figuring out the consequences of how events change the world. Many sites are more about taking an existing store of information and rendering various combinations of that information to as many eyeballs as they can find - think of any media site, for example. Here the architectural challenge often centers on getting your caches right.

Another characteristic of LMAX is that this is a backend system, so it's reasonable to consider how applicable it would be for something acting in an interactive mode. Increasingly, web applications are helping us get used to server systems that react to requests, an aspect that does fit in well with this architecture.
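A minimal sketch of that combination - in-memory state, a journal of input events, single-threaded application, and recovery by replay - might look like this. The `AccountProcessor` example is invented for illustration; a real system would write the journal to durable storage rather than an in-memory list.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Event-sourcing sketch (illustrative): all state lives in memory, every
// input event is journaled before it is applied, and recovery after a
// restart is just replaying the journal into a fresh processor.
final class AccountProcessor {
    private final Map<String, Long> balances = new HashMap<>();
    private final List<String[]> journal = new ArrayList<>();

    // Single-threaded entry point: journal first (durability), then apply.
    void handle(String account, long delta) {
        journal.add(new String[] { account, Long.toString(delta) });
        apply(account, delta);
    }

    private void apply(String account, long delta) {
        balances.merge(account, delta, Long::sum);
    }

    long balance(String account) {
        return balances.getOrDefault(account, 0L);
    }

    // Rebuild state from the journal - the recovery path.
    static AccountProcessor replay(List<String[]> journal) {
        AccountProcessor fresh = new AccountProcessor();
        for (String[] e : journal) fresh.apply(e[0], Long.parseLong(e[1]));
        return fresh;
    }

    List<String[]> journal() { return journal; }
}
```

Because a single thread applies the events, no locks are needed around the state, and because the journal fully determines the state, the database's durability role is covered without a database.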
Where this architecture goes further than most such systems is its absolute use of asynchronous communications, resulting in the changes to the programming model that I outlined earlier. These changes will take some getting used to for most teams. Most people tend to think of programming in synchronous terms and are not used to dealing with asynchrony. Yet it's long been true that asynchronous communication is an essential tool for responsiveness. It will be interesting to see if the wider use of asynchronous communication in the JavaScript world, with AJAX and node.js, will encourage more people to investigate this style. The LMAX team found that while it took a bit of time to adjust to the asynchronous style, it soon became natural and often easier. In particular, error handling was much easier to deal with under this approach.

The LMAX team certainly feels that the days of the coordinating transactional database are numbered. The fact that you can write software more easily using this kind of architecture, and that it runs more quickly, removes much of the justification for the traditional central database.

For my part, I find this a very exciting story. Much of my goal is to concentrate on software that models complex domains. An architecture like this provides good separation of concerns, allowing people to focus on Domain-Driven Design and keeping much of the platform complexity well separated. The close coupling between domain objects and databases has always been an irritation - approaches like this suggest a way out.

If you found this article useful, please share it. I appreciate the feedback and encouragement.