Build your own Neural Network, with PHP!

Vítor Brandão (13.Jun.2018 at 19:10, 45 min)
Talk at PHPSW: Time for the Future, June 2018 (English - UK)

Rating: 5 of 5

Curious about all the hype around Machine Learning and Artificial Intelligence? Heard of "Neural Networks" and "Deep Learning" but confused about what it really means?

In this talk, you'll see what Artificial Neural Networks (ANN) look like and how they can "learn". And along the way, you'll discover how you can build your own ANN, with PHP of course!

Rating: 5 of 5

13.Jun.2018 at 20:04 by Lakshmi Balakrishnan (2 comments) via Web2 LIVE

It’s one of the best talks I’ve heard on machine learning and it’s definitely more interesting than the whole module I had at uni on the same topic.

There was a typo on one of the first few slides on weights: the word "strength" is missing an h. Maybe a bit of explanation of the math side of things would help, especially for those who don't remember the terminology. The math equations towards the end were a bit overwhelming, so if there's any way of simplifying them, it would help.

Overall an excellent talk!

Rating: 5 of 5

13.Jun.2018 at 20:05 by Mr Andrius Bartulis (9 comments) via Web2 LIVE

Great talk! Really enjoyed learning about neural networks! Even though I had to really pay attention to keep up with the various slides and maths, it made sense and explained the topic in a friendly way. Would like to learn more!

Rating: 4 of 5

13.Jun.2018 at 20:06 by Thierry Draper (5 comments) via Web2 LIVE

Knowledgeable speaker who clearly enjoys the subject. Great delivery, treading the line really well between a dry informative talk and making it interesting. I did get mildly confused by the title, though - "an introduction to neural networks illustrated using PHP" may be nearer to the output... if a mouthful!

Rating: 4 of 5

13.Jun.2018 at 20:06 by Rhys Laval (3 comments) via Web2 LIVE

Great talk, bringing a complex topic down to the basics. It may have been helpful to give further examples of use.

Rating: 5 of 5

13.Jun.2018 at 20:07 by Lucia Velasco (17 comments) via Web2 LIVE

Really accessible intro - thank you for encouraging us to ask questions at any point! It was really helpful that you recommended some resources and that you interspersed the technical stuff with comedy.
I struggled to visualise what figure each step might produce - perhaps do an example annotating each line with an example value at that point?
I really enjoyed it and I have a much better understanding of ML, thank you!!

Rating: 5 of 5

13.Jun.2018 at 20:07 by Federico Vecco (4 comments) via Web2 LIVE

Excellent talk. A quite complex topic to explain in a short time, but it was delivered perfectly. Great!

Rating: 4 of 5

13.Jun.2018 at 20:07 by Matt Kynaston (2 comments) via Web2 LIVE

Excellent - a simple enough example to cover in the time, and clear code to see how it’s done. Some more links to explanations on differentiation would have been good.

Rating: 4 of 5

13.Jun.2018 at 20:07 by Rob Wilson (14 comments) via Web2 LIVE

Excellent talk, slides were presented well, and Vítor knew his stuff. Some of the maths involved did get a little complex, and I had to remember my A-level maths course (which I did 20 years ago). Also, it was a little rushed at the end, but this was due to the time limit imposed. Otherwise a great talk, and I'm going to have a play with the code.

Rating: 4 of 5

13.Jun.2018 at 20:08 by Mike Oram (17 comments) via Web2 LIVE

Great intro to a subject I had no idea about. It did get a bit mathsy, but I guess it had to. The code was difficult to follow alongside the maths, so perhaps split them a bit more for those for whom the maths overcomplicates things. Great delivery and clearly a strong understanding. No need for the disclaimer at the start :)

Rating: 4 of 5

13.Jun.2018 at 20:54 by Doug Fitzmaurice (8 comments) via Web2 LIVE

Good talk, it’s obviously a really complicated topic and hard to cover!

I think you showed the cat example too early, it shows a complex network with lots of features before we’ve seen a simple one.

I’d also suggest moving the Training explanation after showing an example of the XOR net with correct weight and bias, so we know what we’re working towards.

I’d also like to see an example of how the derivative and gradient change for a single neuron. My lack of maths understanding means the last section looked like “do stuff to the network until magic numbers give the correct output”.
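[Editor's note: as a rough sketch of the single-neuron example requested here (not the talk's actual code, and with hand-picked values for the input, target, and learning rate), a gradient-descent step for one sigmoid neuron might look like this:]

```php
<?php
// Hypothetical sketch: one sigmoid neuron learning to map
// input 1.0 to target 0.0 by repeated gradient-descent steps.

function sigmoid(float $x): float {
    return 1.0 / (1.0 + exp(-$x));
}

$input  = 1.0;
$target = 0.0;
$weight = 0.8;
$bias   = 0.5;
$rate   = 0.5; // learning rate (step size)

for ($i = 0; $i < 1000; $i++) {
    $z      = $weight * $input + $bias;
    $output = sigmoid($z);

    // Derivative of the squared error with respect to z (chain rule):
    // dE/dz = (output - target) * output * (1 - output)
    $delta = ($output - $target) * $output * (1.0 - $output);

    // Step each parameter downhill along its gradient.
    $weight -= $rate * $delta * $input;
    $bias   -= $rate * $delta;
}

echo $output, "\n"; // output has moved towards the target
```

Each iteration nudges the weight and bias in whichever direction reduces the error, which is all the "magic numbers" are doing.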

Rating: 4 of 5

14.Jun.2018 at 11:18 by Dave Liddament (67 comments) via Web2 LIVE

A great talk.

The topic was interesting. The slides were superb, some of the best I've seen. You spoke really well; it was clear and at the right pace throughout.

To improve... I wonder if you could still tell the same story but remove some of the maths. The maths could still exist, but maybe in a blog post or markdown doc in the git repo, referenced in your talk. It was pretty complicated, so really grasping it probably requires time sitting down and reading.

Also, maybe provide a further reading slide at the end.

My final suggestion would be to submit this to a conference!

A great first full length talk.

Rating: 5 of 5

14.Jun.2018 at 11:40 by Craig Francis (4 comments) via Web2 LIVE

Thanks Vítor, that was a really good introduction to how Neural Networks can work. The use of PHP, while probably not ideal for creating a real system, does help with understanding the process (I'm much more familiar with this language, so I can focus on the ideas rather than the syntax).

As for an improvement... I'm not sure what to suggest. In a way I wonder if there is a better example than modelling XOR: while it keeps the number of inputs small (2), it's a very rigid structure which is hard to see as learning, other than by looking at the final outputs. Those show that the neural network is confident of the result, but isn't overly sure (i.e. 0 vs 0.003).
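[Editor's note: for readers unfamiliar with the XOR example discussed here, this is a hand-wired 2-2-1 network with weights chosen by hand for illustration — they are not learned, and not the values from the talk. It shows both the rigid structure and the near-but-not-exact outputs (close to 0 or 1, never exactly) that this comment refers to:]

```php
<?php
// Hypothetical hand-wired 2-2-1 XOR network. One hidden neuron
// approximates OR, the other AND; the output combines them as
// "OR and not AND", which is XOR.

function sigmoid(float $x): float {
    return 1.0 / (1.0 + exp(-$x));
}

function xorNet(int $a, int $b): float {
    $h1 = sigmoid(20 * $a + 20 * $b - 10); // ~OR
    $h2 = sigmoid(20 * $a + 20 * $b - 30); // ~AND
    return sigmoid(20 * $h1 - 20 * $h2 - 10);
}

foreach ([[0, 0], [0, 1], [1, 0], [1, 1]] as [$a, $b]) {
    printf("%d XOR %d ~ %.3f\n", $a, $b, xorNet($a, $b));
}
```

The sigmoid outputs land near 0 or 1 but never reach them exactly, which is the "confident but not overly sure" behaviour mentioned above.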

Rating: 5 of 5

14.Jun.2018 at 12:34 by Kat Zien (20 comments) via Web2 LIVE

It was a very well-rehearsed talk and you were calm and engaging throughout! You picked a complex topic so well done for managing to explain it clearly and at a good pace in the very limited time you had. The maths went a little over my head so I agree with the other comments that maybe it would be better to leave some of it out and add links for people to read through and digest in their own time (purely because of the time constraints).

I really liked the way you highlighted the code to focus on as you were explaining it in your code snippets. Made it easy to follow along.

There were elements of fun which I liked, you had memes and made some funny comments throughout which gave what could have been a very academic and "dry" presentation a nice and approachable feel.

As Lucia mentioned already, it was great to see you making sure everyone is following along and has a chance to ask for clarifications before moving on to the next part.

And as Dave said, you should totally submit this to conferences :)

Well done and thank you, I enjoyed listening to your talk!

Rating: 5 of 5

16.Jun.2018 at 11:52 by Dan Ackroyd (8 comments) via Web2 LIVE

Apologies in advance if this comment makes no sense, I had to nip to the loo during the talk so could have missed it if you're already doing this.

I think you might be able to make the maths a bit easier to understand by showing the graph where the learning is seeking the minimum value and explaining how the learning tries to get to the lowest point, before saying "in mathematics this is done by looking at the derivative and trying to find where it is zero".

Although everyone will have done derivatives at school, showing the curve and saying we're trying to find the lowest point gives a much easier handle for people to grok, before throwing the big words around.
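[Editor's note: the "walk down the curve to the lowest point" intuition described here can be sketched in a few lines of PHP, using an illustrative curve f(x) = (x - 3)² rather than anything from the talk:]

```php
<?php
// Hypothetical sketch: follow the slope of f(x) = (x - 3)^2
// downhill until it flattens out, i.e. until the derivative
// is (near) zero. The minimum is at x = 3.

$x    = 0.0; // start anywhere on the curve
$rate = 0.1; // step size

for ($i = 0; $i < 100; $i++) {
    $slope = 2 * ($x - 3); // derivative of (x - 3)^2
    $x -= $rate * $slope;  // step in the downhill direction
}

echo $x, "\n"; // very close to 3, the lowest point
```

Training a network is the same walk, just over many weights at once instead of a single x.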

Doug wrote: "I think you showed the cat example too early, it shows a complex network with lots of features before we’ve seen a simple one." - Seconded. Also, it would be good to hold off showing any diagrams with hidden layers until after a clear description of what hidden layers are, and why/when they are useful.
