
Artificial Intelligence Has Reached a Beer-y New Frontier


A startup has begun brewing beer using artificial intelligence and an algorithm that incorporates customer feedback.

Soon, taps like these could be pouring robot-brewed beer.

Artificial intelligence keeps reaching new milestones. It can win at Jeopardy!, defeat chess grandmasters, and diagnose medical conditions.

It is also making big waves in the food industry by counting calories, acting as a chef, and inventing new recipes. Now, however, one company is working to bring the technology to one of humanity's most ancient practices: drinking alcohol.

IntelligentX, a London-based company, has taught a robot to brew beer using an algorithm. Customers can go on Facebook and share their opinions about the beer with the company's chatbot. The algorithm then takes that feedback and begins making changes to the beer. It also uses a decision-making process to work out whether a tweak was successful.
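The article does not describe IntelligentX's actual algorithm, but the loop it outlines - collect ratings, tweak a recipe parameter, keep the change only if feedback improves - can be sketched as a simple hill-climbing routine. Everything here (the `ratings_for` callback, the `bitterness` parameter) is a hypothetical illustration, not the company's system:

```python
import random

def brew_feedback_loop(recipe, ratings_for, rounds=50, step=0.05):
    """Illustrative hill climb: randomly tweak one recipe parameter,
    keep the tweak only if the average customer rating improves."""
    scores = ratings_for(recipe)
    best_score = sum(scores) / len(scores)
    for _ in range(rounds):
        param = random.choice(list(recipe))
        candidate = dict(recipe)
        candidate[param] += random.uniform(-step, step)
        scores = ratings_for(candidate)
        score = sum(scores) / len(scores)
        if score > best_score:  # the "decision-making process":
            recipe, best_score = candidate, score  # keep successful tweaks
    return recipe, best_score
```

For example, with simulated drinkers who prefer a bitterness of 0.4, a recipe starting at 0.2 drifts toward the preferred value over successive rounds; the kept score never decreases, since tweaks are only accepted when they improve it.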

Co-founder Hew Leith outlined the end product: "We're using AI to give our brewers superhuman skills, enabling them to test and receive feedback on our beer quicker than ever before."

The result could be remarkable: a beer that continually refines its palate to satisfy its customers' cravings. Then again, it seems a few bad reviews could lead the beer astray; artificial intelligence has failed laughably before, even in the food industry.

Either way, the experiment is sure to be entertaining - particularly at your favorite watering hole.


'It's able to create knowledge itself': Google unveils AI that learns on its own

AlphaGo Zero beat its 2015 predecessor, which had defeated grandmaster Lee Sedol, by 100 games of Go to 0.

In a major breakthrough for artificial intelligence, AlphaGo Zero took just three days to master the ancient Chinese board game of Go, with no human help.

Last modified on Wed 14 Feb 2018 21.10 GMT

Google's artificial intelligence group, DeepMind, has unveiled the latest incarnation of its Go-playing program, AlphaGo - an AI so powerful that it derived thousands of years of human knowledge of the game before inventing better moves of its own, all in the space of three days.

Named AlphaGo Zero, the AI program has been hailed as a major advance because it mastered the ancient Chinese board game from scratch, and with no human help beyond being told the rules. In games against the 2015 version, which famously beat Lee Sedol, the South Korean grandmaster, in the following year, AlphaGo Zero won 100 to 0.

The feat marks a milestone on the road to general-purpose AIs that can do more than thrash humans at board games. Because AlphaGo Zero learns on its own from a blank slate, its talents can now be turned to a host of real-world problems.

At DeepMind, which is based in London, AlphaGo Zero is working out how proteins fold, a massive scientific challenge that could give drug discovery a sorely needed shot in the arm.

Match 3 of AlphaGo vs Lee Sedol in March 2016. Photograph: Erikbenson

"For us, AlphaGo wasn't just about winning the game of Go," said Demis Hassabis, CEO of DeepMind and a researcher on the team. "It was also a big step for us towards building these general-purpose algorithms." Most AIs are described as "narrow" because they perform only a single task, such as translating languages or recognising faces, but general-purpose AIs could potentially outperform humans at many different tasks. In the next decade, Hassabis believes that AlphaGo's descendants will work alongside humans as scientific and medical experts.

Previous versions of AlphaGo learned their moves by training on thousands of games played by strong human amateurs and professionals. AlphaGo Zero had no such help. Instead, it learned purely by playing itself millions of times over. It began by placing stones on the Go board at random but swiftly improved as it discovered winning strategies.

David Silver describes how the Go-playing AI program AlphaGo Zero discovered new knowledge from scratch. Credit: DeepMind

"It's more powerful than previous approaches because by not using human data, or human expertise in any fashion, we've removed the constraints of human knowledge and it is able to create knowledge itself," said David Silver, AlphaGo's lead researcher.

The program amasses its skill through a procedure called reinforcement learning. It is the same method by which balance on the one hand, and skinned knees on the other, help humans master the art of bike riding. When AlphaGo Zero plays a good move, it is more likely to be rewarded with a win. When it makes a bad one, it edges closer to a loss.
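The core idea of reinforcement learning described above - reward good play with a win signal, penalise bad play with a loss signal - can be shown in a minimal sketch. This is a toy tabular update, not DeepMind's implementation; the state names and learning rate are invented for illustration:

```python
def reinforce(values, game_states, outcome, lr=0.1):
    """After a self-play game, shift the value estimate of every
    visited state toward the final outcome (1.0 = win, 0.0 = loss)."""
    for s in game_states:
        v = values.get(s, 0.5)              # start from an uninformed prior
        values[s] = v + lr * (outcome - v)  # winning states drift toward 1
    return values

values = {}
reinforce(values, ["opening", "midgame"], outcome=1.0)  # a won game
reinforce(values, ["opening", "blunder"], outcome=0.0)  # a lost game
# "opening" appeared in both a win and a loss, so its value sits near 0.5;
# "midgame" rises above "blunder", which only ever appeared in a loss.
```

Over millions of such self-play games, states (and the moves leading to them) that correlate with wins accumulate higher values, which is what steers the program toward stronger play.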

Demis Hassabis, DeepMind's CEO: 'For us, AlphaGo wasn't just about winning the game of Go.' Photograph: DeepMind / Nature

At the heart of the program is a group of software "neurons" connected together to form an artificial neural network. For each turn of the game, the network looks at the positions of the pieces on the Go board and calculates which moves might be played next and the probability of them leading to a win. After each game, it updates its neural network, making it a stronger player for the next bout. Though far better than previous versions, AlphaGo Zero is a simpler program and mastered the game faster despite training on less data and running on a smaller computer. Given more time, it could have learned the rules for itself too, Silver said.
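The network's final step - turning raw scores for candidate moves into probabilities of play - is typically done with a softmax. The sketch below shows that conversion in isolation, with made-up move names and scores standing in for a real policy head's output:

```python
import math

def move_distribution(scores):
    """Convert raw network scores for candidate moves into a
    probability distribution via softmax, as a policy head would."""
    m = max(scores.values())  # subtract the max for numerical stability
    exps = {mv: math.exp(s - m) for mv, s in scores.items()}
    total = sum(exps.values())
    return {mv: e / total for mv, e in exps.items()}
```

Calling `move_distribution({"D4": 2.0, "Q16": 1.0, "pass": -1.0})` yields probabilities that sum to one, with the highest-scoring move ("D4") the most likely; updating the network after each game amounts to nudging these scores so that moves which led to wins score higher next time.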

What is AI?

Artificial intelligence has various definitions, but in general it means a program that uses data to build a model of some aspect of the world. This model is then used to make informed decisions and predictions about future events. The technology is used widely, to provide speech and face recognition, language translation, and personal recommendations on music, film and shopping sites. In the future, it could deliver driverless cars, smart personal assistants, and intelligent energy grids. AI has the potential to make organisations more effective and efficient, but the technology raises serious issues of ethics, governance, privacy and law.

Writing in the journal Nature, the researchers describe how AlphaGo Zero started off terribly, progressed to the level of a naive amateur, and ultimately deployed the highly strategic moves used by grandmasters, all in a matter of days. It discovered one common play, called a joseki, in the first 10 hours. Other moves, with names such as "small avalanche" and "knight's move pincer", soon followed. After three days, the program had discovered brand new moves that human experts are now studying. Intriguingly, the program grasped some advanced moves long before it discovered simpler ones, such as a pattern called a ladder that human Go players tend to grasp early on.


"It discovered some of the best plays, josekis, and then it went beyond those plays and found something even better," said Hassabis. "You can see it rediscovering thousands of years of human knowledge."

Eleni Vasilaki, professor of computational neuroscience at Sheffield University, said it was an impressive feat. "This may suggest that by not involving a human expert in its training, AlphaGo discovers better moves that surpass human intelligence on this specific game," she said. But she pointed out that, while computers are beating humans at games that involve complex calculations and precision, they are far from matching humans at other tasks. "AI fails in tasks that are surprisingly easy for humans," she said. "Just look at the performance of a humanoid robot in everyday tasks such as walking, running and kicking a ball."

Tom Mitchell, a computer scientist at Carnegie Mellon University in Pittsburgh, called AlphaGo Zero an "outstanding engineering accomplishment". He added: "It closes the book on whether humans are ever going to catch up with computers at Go. I guess the answer is no. But it opens a new book, where computers teach humans to play Go better than they used to."

David Silver describes how the AI program AlphaGo Zero learned to play Go. Credit: DeepMind



Previous versions of AlphaGo learned their moves by training on thousands of games played by strong human amateurs and professionals. AlphaGo Zero had no such help. Instead, it learned purely by playing itself millions of times over. It began by placing stones on the Go board at random but swiftly improved as it discovered winning strategies.

David Silver describes how the Go playing AI program, AlphaGo Zero, discovers new knowledge from scratch. Credit: DeepMind

“It’s more powerful than previous approaches because by not using human data, or human expertise in any fashion, we’ve removed the constraints of human knowledge and it is able to create knowledge itself,” said David Silver, AlphaGo’s lead researcher.

The program amasses its skill through a procedure called reinforcement learning. It is the same method by which balance on the one hand, and scuffed knees on the other, help humans master the art of bike riding. When AlphaGo Zero plays a good move, it is more likely to be rewarded with a win. When it makes a bad move, it edges closer to a loss.

Demis Hassabis, CEO of DeepMind: ‘For us, AlphaGo wasn’t just about winning the game of Go.’ Photograph: DeepMind/Nature

At the heart of the program is a group of software “neurons” that are connected together to form an artificial neural network. For each turn of the game, the network looks at the positions of the pieces on the Go board and calculates which moves might be made next and probability of them leading to a win. After each game, it updates its neural network, making it stronger player for the next bout. Though far better than previous versions, AlphaGo Zero is a simpler program and mastered the game faster despite training on less data and running on a smaller computer. Given more time, it could have learned the rules for itself too, Silver said.

What is AI?

Artificial Intelligence has various definitions, but in general it means a program that uses data to build a model of some aspect of the world. This model is then used to make informed decisions and predictions about future events. The technology is used widely, to provide speech and face recognition, language translation, and personal recommendations on music, film and shopping sites. In the future, it could deliver driverless cars, smart personal assistants, and intelligent energy grids. AI has the potential to make organisations more effective and efficient, but the technology raises serious issues of ethics, governance, privacy and law.

Writing in the journal Nature, the researchers describe how AlphaGo Zero started off terribly, progressed to the level of a naive amateur, and ultimately deployed highly strategic moves used by grandmasters, all in a matter of days. It discovered one common play, called a joseki, in the first 10 hours. Other moves, with names such as “small avalanche” and “knight’s move pincer” soon followed. After three days, the program had discovered brand new moves that human experts are now studying. Intriguingly, the program grasped some advanced moves long before it discovered simpler ones, such as a pattern called a ladder that human Go players tend to grasp early on.

This article includes content hosted on gfycat.com . Kami meminta izin anda sebelum sesuatu dimuatkan, kerana penyedia mungkin menggunakan kuki dan teknologi lain. Untuk melihat kandungan ini, klik & # x27Bolehkan dan teruskan & # x27.

“It discovers some best plays, josekis, and then it goes beyond those plays and finds something even better,” said Hassabis. “You can see it rediscovering thousands of years of human knowledge.”

Eleni Vasilaki, professor of computational neuroscience at Sheffield University, said it was an impressive feat. “This may very well imply that by not involving a human expert in its training, AlphaGo discovers better moves that surpass human intelligence on this specific game,” she said. But she pointed out that, while computers are beating humans at games that involve complex calculations and precision, they are far from even matching humans at other tasks. “AI fails in tasks that are surprisingly easy for humans,” she said. “Just look at the performance of a humanoid robot in everyday tasks such as walking, running and kicking a ball.”

Tom Mitchell, a computer scientist at Carnegie Mellon University in Pittsburgh called AlphaGo Zero an “outstanding engineering accomplishment”. He added: “It closes the book on whether humans are ever going to catch up with computers at Go. I guess the answer is no. But it opens a new book, which is where computers teach humans how to play Go better than they used to.”

David Silver describes how the AI program AlphaGo Zero learns to play Go. Credit: DeepMind


'It's able to create knowledge itself': Google unveils AI that learns on its own

AlphaGo Zero beat its 2015 predecessor, which vanquished grandmaster Lee Sedol, 100 games of Go to 0.

In a major breakthrough for artificial intelligence, AlphaGo Zero took just three days to master the ancient Chinese board game of Go, with no human help.

Last modified on Wed 14 Feb 2018 21.10 GMT

Google’s artificial intelligence group, DeepMind, has unveiled the latest incarnation of its Go-playing program, AlphaGo – an AI so powerful that it derived thousands of years of human knowledge of the game before inventing better moves of its own, all in the space of three days.

Named AlphaGo Zero, the AI program has been hailed as a major advance because it mastered the ancient Chinese board game from scratch, and with no human help beyond being told the rules. In games against the 2015 version, which famously beat Lee Sedol, the South Korean grandmaster, in the following year, AlphaGo Zero won 100 to 0.

The feat marks a milestone on the road to general-purpose AIs that can do more than thrash humans at board games. Because AlphaGo Zero learns on its own from a blank slate, its talents can now be turned to a host of real-world problems.

At DeepMind, which is based in London, AlphaGo Zero is working out how proteins fold, a massive scientific challenge that could give drug discovery a sorely needed shot in the arm.

Match 3 of AlphaGo vs Lee Sedol in March 2016. Photograph: Erikbenson

“For us, AlphaGo wasn’t just about winning the game of Go,” said Demis Hassabis, CEO of DeepMind and a researcher on the team. “It was also a big step for us towards building these general-purpose algorithms.” Most AIs are described as “narrow” because they perform only a single task, such as translating languages or recognising faces, but general-purpose AIs could potentially outperform humans at many different tasks. In the next decade, Hassabis believes that AlphaGo’s descendants will work alongside humans as scientific and medical experts.

Previous versions of AlphaGo learned their moves by training on thousands of games played by strong human amateurs and professionals. AlphaGo Zero had no such help. Instead, it learned purely by playing itself millions of times over. It began by placing stones on the Go board at random but swiftly improved as it discovered winning strategies.

David Silver describes how the Go playing AI program, AlphaGo Zero, discovers new knowledge from scratch. Credit: DeepMind

“It’s more powerful than previous approaches because by not using human data, or human expertise in any fashion, we’ve removed the constraints of human knowledge and it is able to create knowledge itself,” said David Silver, AlphaGo’s lead researcher.

The program amasses its skill through a procedure called reinforcement learning. It is the same method by which balance on the one hand, and scuffed knees on the other, help humans master the art of bike riding. When AlphaGo Zero plays a good move, it is more likely to be rewarded with a win. When it makes a bad move, it edges closer to a loss.
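The reinforcement-learning loop described above can be sketched in miniature. The toy below is not DeepMind's code: it trains a shared Q-table by self-play on a simple Nim-style game, feeding the final result back, with alternating sign, into the winner's and loser's moves. The game, function names, and hyperparameters are all illustrative assumptions.

```python
import random

# Toy Nim: players alternately take 1 or 2 stones; whoever takes the last stone wins.
# Both sides share one Q-table, and each finished game rewards the winner's moves
# (+1) and punishes the loser's (-1), just as a win or loss nudges AlphaGo Zero.

def train(episodes=5000, alpha=0.5, epsilon=0.2, start=10, seed=0):
    rng = random.Random(seed)
    q = {}  # (stones_left, stones_taken) -> estimated value for the player to move
    for _ in range(episodes):
        stones, history = start, []  # history of (state, action), one entry per turn
        while stones > 0:
            moves = [m for m in (1, 2) if m <= stones]
            if rng.random() < epsilon:          # explore occasionally
                take = rng.choice(moves)
            else:                               # otherwise play the best-known move
                take = max(moves, key=lambda m: q.get((stones, m), 0.0))
            history.append((stones, take))
            stones -= take
        # The player who made the last move won; the sign flips each turn back.
        reward = 1.0
        for state, action in reversed(history):
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + alpha * (reward - old)
            reward = -reward
    return q

def best_move(q, stones):
    moves = [m for m in (1, 2) if m <= stones]
    return max(moves, key=lambda m: q.get((stones, m), 0.0))
```

After training, the table recovers the game's known optimal strategy (always leave the opponent a multiple of three stones) purely from win/loss feedback, with no strategy hints supplied.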

Demis Hassabis, CEO of DeepMind: ‘For us, AlphaGo wasn’t just about winning the game of Go.’ Photograph: DeepMind/Nature

At the heart of the program is a group of software “neurons” that are connected together to form an artificial neural network. For each turn of the game, the network looks at the positions of the pieces on the Go board and calculates which moves might be made next and the probability of them leading to a win. After each game, it updates its neural network, making it a stronger player for the next bout. Though far better than previous versions, AlphaGo Zero is a simpler program and mastered the game faster despite training on less data and running on a smaller computer. Given more time, it could have learned the rules for itself too, Silver said.
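The two calculations described here, a probability over candidate moves and an estimate of winning, can be illustrated with a hand-rolled linear stand-in for the network. Everything below (the board encoding, weight shapes, and function names) is a simplified assumption for illustration; AlphaGo Zero's actual network is a deep, learned neural network, not a fixed linear map.

```python
import math

# Miniature of the network's two outputs: from a board encoding, a "policy"
# head scores each candidate move and a "value" head estimates the chance of
# winning. Weights are placeholders; in AlphaGo Zero they are learned by self-play.

def softmax(xs):
    """Turn raw move scores into a probability distribution over moves."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def evaluate(board, policy_w, value_w):
    """board: list of floats (+1 own stone, -1 opponent's, 0 empty).
    policy_w: one weight row per candidate move; value_w: one weight per cell."""
    move_logits = [sum(w * x for w, x in zip(row, board)) for row in policy_w]
    move_probs = softmax(move_logits)   # "which moves might be made next"
    # tanh squashes the value head into (-1, 1): near +1 means a likely win.
    value = math.tanh(sum(w * x for w, x in zip(value_w, board)))
    return move_probs, value
```

In the real system these per-turn evaluations guide a tree search, and the game's eventual outcome is what updates the weights between bouts.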

What is AI?

Artificial Intelligence has various definitions, but in general it means a program that uses data to build a model of some aspect of the world. This model is then used to make informed decisions and predictions about future events. The technology is used widely, to provide speech and face recognition, language translation, and personal recommendations on music, film and shopping sites. In the future, it could deliver driverless cars, smart personal assistants, and intelligent energy grids. AI has the potential to make organisations more effective and efficient, but the technology raises serious issues of ethics, governance, privacy and law.

Writing in the journal Nature, the researchers describe how AlphaGo Zero started off terribly, progressed to the level of a naive amateur, and ultimately deployed highly strategic moves used by grandmasters, all in a matter of days. It discovered one common play, called a joseki, in the first 10 hours. Other moves, with names such as “small avalanche” and “knight’s move pincer” soon followed. After three days, the program had discovered brand new moves that human experts are now studying. Intriguingly, the program grasped some advanced moves long before it discovered simpler ones, such as a pattern called a ladder that human Go players tend to grasp early on.


“It discovers some best plays, josekis, and then it goes beyond those plays and finds something even better,” said Hassabis. “You can see it rediscovering thousands of years of human knowledge.”

Eleni Vasilaki, professor of computational neuroscience at Sheffield University, said it was an impressive feat. “This may very well imply that by not involving a human expert in its training, AlphaGo discovers better moves that surpass human intelligence on this specific game,” she said. But she pointed out that, while computers are beating humans at games that involve complex calculations and precision, they are far from even matching humans at other tasks. “AI fails in tasks that are surprisingly easy for humans,” she said. “Just look at the performance of a humanoid robot in everyday tasks such as walking, running and kicking a ball.”

Tom Mitchell, a computer scientist at Carnegie Mellon University in Pittsburgh, called AlphaGo Zero an “outstanding engineering accomplishment”. He added: “It closes the book on whether humans are ever going to catch up with computers at Go. I guess the answer is no. But it opens a new book, which is where computers teach humans how to play Go better than they used to.”

David Silver describes how the AI program AlphaGo Zero learns to play Go. Credit: DeepMind



David Silver describes how the AI program AlphaGo Zero learns to play Go. Credit: DeepMind


'It's able to create knowledge itself': Google unveils AI that learns on its own

AlphaGo Zero beat its 2015 predecessor, which vanquished grandmaster Lee Sedol, 100 games of Go to 0.

In a major breakthrough for artificial intelligence, AlphaGo Zero took just three days to master the ancient Chinese board game of Go, with no human help.

Last modified on Wed 14 Feb 2018 21.10 GMT

Google’s artificial intelligence group, DeepMind, has unveiled the latest incarnation of its Go-playing program, AlphaGo – an AI so powerful that it derived thousands of years of human knowledge of the game before inventing better moves of its own, all in the space of three days.

Named AlphaGo Zero, the AI program has been hailed as a major advance because it mastered the ancient Chinese board game from scratch, with no human help beyond being told the rules. In games against the 2015 version, which famously beat Lee Sedol, the South Korean grandmaster, the following year, AlphaGo Zero won 100 to 0.

The feat marks a milestone on the road to general-purpose AIs that can do more than thrash humans at board games. Because AlphaGo Zero learns on its own from a blank slate, its talents can now be turned to a host of real-world problems.

At DeepMind, which is based in London, AlphaGo Zero is working out how proteins fold, a massive scientific challenge that could give drug discovery a sorely needed shot in the arm.

Match 3 of AlphaGo vs Lee Sedol in March 2016. Photograph: Erikbenson

“For us, AlphaGo wasn’t just about winning the game of Go,” said Demis Hassabis, CEO of DeepMind and a researcher on the team. “It was also a big step for us towards building these general-purpose algorithms.” Most AIs are described as “narrow” because they perform only a single task, such as translating languages or recognising faces, but general-purpose AIs could potentially outperform humans at many different tasks. In the next decade, Hassabis believes that AlphaGo’s descendants will work alongside humans as scientific and medical experts.

Previous versions of AlphaGo learned their moves by training on thousands of games played by strong human amateurs and professionals. AlphaGo Zero had no such help. Instead, it learned purely by playing itself millions of times over. It began by placing stones on the Go board at random but swiftly improved as it discovered winning strategies.

David Silver describes how the Go playing AI program, AlphaGo Zero, discovers new knowledge from scratch. Credit: DeepMind

“It’s more powerful than previous approaches because by not using human data, or human expertise in any fashion, we’ve removed the constraints of human knowledge and it is able to create knowledge itself,” said David Silver, AlphaGo’s lead researcher.

The program amasses its skill through a procedure called reinforcement learning. It is the same method by which balance on the one hand, and scuffed knees on the other, help humans master the art of bike riding. When AlphaGo Zero plays a good move, it is more likely to be rewarded with a win. When it makes a bad move, it edges closer to a loss.
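The reward-driven loop described above can be sketched in miniature. The toy “game” below is an invented stand-in (a single choice among three moves, one of which happens to win), not Go, and the update rule is a deliberately simplified reinforcement signal rather than AlphaGo Zero’s actual algorithm: moves that lead to a win have their selection weight nudged up, losing moves have it nudged down.

```python
import random

# Hypothetical toy game: the agent picks a move 0-2, and move 2 wins.
# This illustrates only the reward principle, not DeepMind's method.

def play_and_learn(weights, winning_move=2, rounds=2000, lr=0.1):
    """Reinforce each move's selection weight by its win/loss outcome."""
    for _ in range(rounds):
        # Sample a move in proportion to the current weights (the "policy").
        total = sum(weights)
        r, move = random.uniform(0, total), 0
        while r > weights[move]:
            r -= weights[move]
            move += 1
        reward = 1.0 if move == winning_move else -1.0  # win or loss signal
        # Nudge the chosen move's weight toward the outcome; keep it positive.
        weights[move] = max(0.01, weights[move] + lr * reward)
    return weights

random.seed(0)
learned = play_and_learn([1.0, 1.0, 1.0])
best = max(range(3), key=lambda m: learned[m])
```

After enough rounds the winning move dominates the policy, which is the same feedback loop, writ very small, that lets self-play alone turn random stone placement into strong play.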

Demis Hassabis, CEO of DeepMind: ‘For us, AlphaGo wasn’t just about winning the game of Go.’ Photograph: DeepMind/Nature

At the heart of the program is a group of software “neurons” that are connected together to form an artificial neural network. For each turn of the game, the network looks at the positions of the pieces on the Go board and calculates which moves might be made next and the probability of them leading to a win. After each game, it updates its neural network, making it a stronger player for the next bout. Though far better than previous versions, AlphaGo Zero is a simpler program and mastered the game faster despite training on less data and running on a smaller computer. Given more time, it could have learned the rules for itself too, Silver said.
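The policy-and-value idea in the paragraph above can be illustrated with a toy network. The board size, random linear weights and single layer below are illustrative assumptions, not the deep architecture the real program uses: the point is only that one evaluation of a board position yields both a probability for each candidate move and an estimated probability of winning.

```python
import math
import random

# Toy 3x3 board encoding: +1 our stones, -1 opponent, 0 empty.
# Everything here (sizes, weights) is an illustrative assumption.
BOARD_CELLS = 9
NUM_MOVES = BOARD_CELLS

random.seed(1)
policy_w = [[random.gauss(0, 0.1) for _ in range(BOARD_CELLS)]
            for _ in range(NUM_MOVES)]
value_w = [random.gauss(0, 0.1) for _ in range(BOARD_CELLS)]

def softmax(xs):
    """Turn raw scores into probabilities that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def evaluate(board):
    """Return (move probabilities, estimated win probability) for a board."""
    logits = [sum(w * b for w, b in zip(row, board)) for row in policy_w]
    probs = softmax(logits)                      # the "policy" output
    score = sum(w * b for w, b in zip(value_w, board))
    value = 1 / (1 + math.exp(-score))           # the "value" output
    return probs, value

board = [1, -1, 0, 0, 1, 0, -1, 0, 0]
probs, win_prob = evaluate(board)
```

In the real system the network's outputs guide which moves are explored, and the game results flow back to update the weights, making the network a stronger player after each bout.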

What is AI?

Artificial Intelligence has various definitions, but in general it means a program that uses data to build a model of some aspect of the world. This model is then used to make informed decisions and predictions about future events. The technology is used widely, to provide speech and face recognition, language translation, and personal recommendations on music, film and shopping sites. In the future, it could deliver driverless cars, smart personal assistants, and intelligent energy grids. AI has the potential to make organisations more effective and efficient, but the technology raises serious issues of ethics, governance, privacy and law.

Writing in the journal Nature, the researchers describe how AlphaGo Zero started off terribly, progressed to the level of a naive amateur, and ultimately deployed highly strategic moves used by grandmasters, all in a matter of days. It discovered one common play, called a joseki, in the first 10 hours. Other moves, with names such as “small avalanche” and “knight’s move pincer” soon followed. After three days, the program had discovered brand new moves that human experts are now studying. Intriguingly, the program grasped some advanced moves long before it discovered simpler ones, such as a pattern called a ladder that human Go players tend to grasp early on.

“It discovers some best plays, josekis, and then it goes beyond those plays and finds something even better,” said Hassabis. “You can see it rediscovering thousands of years of human knowledge.”

Eleni Vasilaki, professor of computational neuroscience at Sheffield University, said it was an impressive feat. “This may very well imply that by not involving a human expert in its training, AlphaGo discovers better moves that surpass human intelligence on this specific game,” she said. But she pointed out that, while computers are beating humans at games that involve complex calculations and precision, they are far from even matching humans at other tasks. “AI fails in tasks that are surprisingly easy for humans,” she said. “Just look at the performance of a humanoid robot in everyday tasks such as walking, running and kicking a ball.”

Tom Mitchell, a computer scientist at Carnegie Mellon University in Pittsburgh, called AlphaGo Zero an “outstanding engineering accomplishment”. He added: “It closes the book on whether humans are ever going to catch up with computers at Go. I guess the answer is no. But it opens a new book, which is where computers teach humans how to play Go better than they used to.”

David Silver describes how the AI program AlphaGo Zero learns to play Go. Credit: DeepMind

