http://i.imgur.com/dx7sVXj.jpg
pretty epic.
Would you like to play a game? Tic Tac Toe?
Last edited by fuddam; 01-07-2013 at 09:23 PM.
How bizarre
I wonder how long it actually took them to stop attacking each other? I suppose if they tried every possible method of attacking one another, over and over, they would eventually fail enough times on each one to register it as a bad idea. You would have thought that the variable of just one new person entering the server would cause them to spring into life, since it gives them all a whole load of new possibilities they haven't tried; it seems very odd that they waited until he attacked one of them before doing anything.
Nice, but you failed at the thread title; it should be Quake 3 bots.
Well that is interesting, bots that learn from mistakes. That is... how Skynet starts off.
All the bot ai logfiles were the exact same size. I reckon they've just filled their logs and got stuck. I'd say this is a pretty unlikely situation and kudos to the devs for the fact the game didn't crash immediately!
That is brilliant. I didn't know that artificial intelligence capable of learning even existed, and in a video game at that. The described scenario sounds intriguing and creepy all at once.
John Carmack denied this yesterday on Twitter. I didn't know what it was about at the time, but now it makes sense!
Yeah, I didn't want to be that guy, but it isn't how these simple AIs work.
The simplest way to understand the principles is to look at something called a Learning Classifier System (LCS). These are effectively a table, with entries of the form "When I saw X, I tried Y, and the outcome was Z".
This means that when the AI sees X, it can look at what it has tried before and what the outcome was. It also stores how many times it has tried each action. Obviously, just because something had a bad outcome once doesn't mean it always will.
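A minimal sketch of that X/Y/Z table in Python (the class, situation, and action names here are my own invention for illustration, not from any real LCS library or the Quake 3 source):

```python
from collections import defaultdict

class LCSTable:
    """Toy 'When I saw X, I tried Y, the outcome was Z' table."""

    def __init__(self):
        # (situation, action) -> [sum of outcomes, times tried]
        self.table = defaultdict(lambda: [0.0, 0])

    def record(self, situation, action, outcome):
        entry = self.table[(situation, action)]
        entry[0] += outcome
        entry[1] += 1

    def average_outcome(self, situation, action):
        total, tries = self.table[(situation, action)]
        return total / tries if tries else 0.0

    def best_action(self, situation, actions):
        # Pick the action with the best average outcome seen so far.
        return max(actions, key=lambda a: self.average_outcome(situation, a))

lcs = LCSTable()
lcs.record("enemy_in_sight", "attack", -1)  # attacking went badly
lcs.record("enemy_in_sight", "attack", -1)  # ...and badly again
lcs.record("enemy_in_sight", "hide", +1)    # hiding went well
choice = lcs.best_action("enemy_in_sight", ["attack", "hide"])  # "hide"
```

Enough repeated bad outcomes drag an action's average down, which is exactly the "fail enough times to register it as a bad idea" behaviour described above.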
Then you also have your exploration: how far you stray from whatever currently looks like the best option. This is important because you might converge on a local maximum rather than the best overall. To understand that, think of hill climbing. If you plotted the elevation of a climb up Snowdon, you would see that you go both up and down on your way to the top. But how do you know you've reached the top? It isn't as simple as saying "well, we're going downhill now, so I must have just passed the top"; otherwise you'd still be in the car park, declaring a tiny mound to be the top of Snowdon.
This is why simple AIs always have exploration desires built in, normally with some random behaviour; this desire to explore is vital to avoid ending up on a local maximum (that first hill!). Much more advanced forms of these AIs are used too, such as genetic algorithms with crossover and mutation, modelled on nature's evolution. It is funny how the simplest single-celled organism gives us a model for brilliant intelligence.
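The local-maximum trap from the Snowdon analogy can be sketched with a toy hill climber (the elevation function, step size, and restart count are all made up for illustration):

```python
import random

def elevation(x):
    # Two hills: a small one at x = 2 (height 1) and a big one at x = 8 (height 3).
    return max(0.0, 1 - abs(x - 2)) + max(0.0, 3 - 0.75 * abs(x - 8))

def greedy_climb(x, step=0.5):
    # Pure hill climbing: only move while a neighbour is strictly higher.
    while True:
        up, down = elevation(x + step), elevation(x - step)
        if max(up, down) <= elevation(x):
            return x  # no uphill neighbour: a (possibly local) maximum
        x = x + step if up >= down else x - step

def climb_with_restarts(tries=20, seed=0):
    # Crude 'exploration': restart from random spots and keep the best summit.
    rng = random.Random(seed)
    return max((greedy_climb(rng.uniform(0, 10)) for _ in range(tries)),
               key=elevation)

stuck = greedy_climb(1.5)     # converges to the small hill at x = 2
best = climb_with_restarts()  # random restarts find the big hill near x = 8
```

Starting next to the small hill, the greedy climber stops at x = 2 even though a much higher summit exists at x = 8; adding even crude randomness (restarts here, or the random behaviours bots are given) is what escapes that first hill.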
By watching organic evolution we see that life has a desire to mutate, to explore too. Someone might be born with fairer skin, almost randomly, and hey, it works for them in their climate, so they succeed. The point is that the bots may well get traumatised by bad consequences (a bad Z in the equation above!) but they will still always try some Y; normally the bounds of it are tightly controlled.
At uni I nearly burnt out a little teaching robot I was playing with while setting up for a summer school. I had given it the simple logic: going forward good, going backwards bad. But I didn't have any exploration built in (I had a big bug in my code). This meant the thing kept trying to go forwards only to have to go backwards, so it learnt that going forwards was bad. This 'bad' wasn't enough to stop it completely, though, so it would try to go forward to 'scratch the itch' I had programmed into it, then stop, and do it again. The problem was that it was doing this at fraction-of-a-second intervals (about 1 million times a second). Stepper motors don't like that.
Anyway, Wikipedia has more on this, and a nice gif showing it. There is some example I remember from years back, but it is no doubt in Java.
https://en.wikipedia.org/wiki/Hill_c...g#Local_maxima