
· Registered · 8,686 Posts · Discussion Starter · #21 ·
Just a quick follow-up on what I was writing about above, and then on to something else I want to mention...

I ended up taking a closer look at the self discharge data I mentioned above, and overall it's a bit too inconclusive, not good enough. I posted a couple graphs based on those data in another thread and explained it a bit there: The quintessential Insight NiMH voltage thread

The gist of it is this: Discharging a bit off the top will mitigate the impact of uneven self discharge rates on cell-to-cell balance only if the faster self discharge cells tend to self discharge even faster at higher charge states/voltages. I just can't conclude that that's the case from the weak dataset I have, and I don't care enough about it to create better data.

* * *

So, since I got back from the Danville Insight meet I've more or less switched to using my pack in the top charge state range, and I've been looking closer at how the car deals with the top. There's really a lot of interesting stuff going on, a lot of things to talk about. For now I just want to mention a couple things about the high charge state cutoff 'algorithms' in play. Mainly, there must be quite a refined program going on in one or more of the computers that makes sure you're not overcharging a cell, stick-pair, pack - whatever...

I think it was here, or maybe in another thread, where I talked a lot about the low-end slope detection algorithm in play: the BCM is able to detect when a cell is empty by calculating the slope of stick-pair voltage discharge curves - a steep slope reflects an empty cell, and the BCM (or MCM) throttles discharge current and then throws a neg recal on the second detection of a steep slope. I mentioned at the time that I thought it was likely something similar plays out at the top end, to determine full or too-full... Can't say I know that's what happens, but definitely something similar and very iterative does play out.
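
To make that concrete, here's a bare-bones sketch of what that kind of slope detection might look like - the threshold and sample timing are made-up numbers for illustration, not Honda's actual values:

```python
# Minimal sketch of the low-end slope-detection idea: watch a tap's voltage under
# discharge and call a cell "empty" when the drop gets too steep.
# The threshold and sample period are invented - not Honda's actual values.

EMPTY_SLOPE = -0.05   # assumed: volts per second on a 12-cell tap

def steep_slope(v_now: float, v_prev: float, dt: float = 1.0) -> bool:
    return (v_now - v_prev) / dt < EMPTY_SLOPE

# First detection -> throttle discharge current; second detection -> neg recal.
detections = 0
for v_prev, v_now in [(14.60, 14.57), (14.55, 14.35), (14.30, 14.00)]:
    if steep_slope(v_now, v_prev):
        detections += 1
        print("throttle assist" if detections == 1 else "neg recal (battery empty)")
```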

I've been resetting state of charge with the OBDIIC&C and more or less trying to stuff the pack, teetering at the edge of when this high cutoff algorithm kicks in. I noticed a handful of things today that I've either never noticed or never really thought too much about. Here's a couple of them off the top of my head:

-At some juncture during the sequence of 'full detection' parameters, one of the computers commands a discharge: the 12V load is sourced directly from the IMA pack rather than from the motor.

This is a really weird little program. I can't tell exactly what triggers it, and it usually only lasts until I 'press the clutch pedal' - i.e. I can flip my calpod switch ON then OFF quickly, and that will disable this drain.

Sometimes when this drain is happening, subsequent regen will trigger the dash CHRG lights - but OBDIIC&C shows no current. So, it's like the BCM or ECM triggers that discharge/drain, but perhaps the MCM doesn't get the message(?) - whatever drives the dash regen lights, that's still acting like everything's normal...

This drain seems to be an 'afterthought': it's not the main/first high-cutoff behavior; it seems to happen only after 'something else' happens first. For instance, maybe an initial high tap voltage or steep slope is detected and regen current is throttled/limited. But then, perhaps a high tap resting voltage is detected, perhaps for a set duration, and then the drain will kick in... I know the drain will normally kick in once the nominal charge state reaches its normal set max, such as 81%. But if you're manipulating the system, such as by resetting nominal charge state from 80% to 75%, that normal trigger is defeated, and the other real-time monitored parameters come into play and are revealed...

-It looks like the absolute cutoff is a resting tap voltage of 17.4V (1.45V per cell), or the equivalent.

I can watch total pack voltage and current during regen and see that the pack itself isn't quite full; typically I'm keeping an eye out for about 186V at about 6.5 amps as an indicator of truly full. I've gotten closer, but still quite far away from that. I think I've seen maybe 180V at maybe 10 amps. But the highest resting voltage I've seen is about 174V, and usually I haven't been able to get that to 'stick' for long; 174V for maybe 30 seconds, and then a more stable 172-173V. At this point the 'car' is not allowing any more charge.
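
Just for quick reference, the per-cell arithmetic on those numbers (these are the values I've observed, not anything pulled from the firmware):

```python
# Back-of-envelope check of the observed top-end numbers (observed values, not firmware).
CELLS_PER_TAP = 12
TAPS = 10

resting_cutoff_tap_v = 17.4     # highest resting tap voltage the car seems to allow
print(resting_cutoff_tap_v / CELLS_PER_TAP)          # -> 1.45 V per cell

truly_full_pack_v = 186.0       # rough "truly full" pack voltage at ~6.5 A of regen
print(truly_full_pack_v / (CELLS_PER_TAP * TAPS))    # -> 1.55 V per cell under light charge
```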

You can charge substantially more than the car would normally allow if you just work the system a little. For example, with one of the BCMs that pos recals to 75%, you can charge from 75% to 80%, reset with OBDIIC&C to 75%, charge another 5%, reset again to 75%, and so forth. But then, once you start seeing the 'automatic drain', you can discharge just a little to bring voltage down, yet then charge even more than you discharged without triggering the absolute top end cutoffs.

It all seems precariously designed around tap voltages - voltages that suffer a ton of hysteresis. I'm pretty sure OEM Insight cells, and probably Civic cells, can exhibit a fairly large degree of voltage hysteresis at the top end (i.e. the voltage can vary a lot), but that variation is due to short-lived, temporary, reversible electro-chemical phenomena. Does the BCM adequately deal with this? I don't think it does.

It seems like the BCM must have fixed voltage thresholds (probably adjusted for current and temperature), and once a tap hits the threshold, or once a steep slope is detected, charging is done. But subsequent usage around that charge state can 'loosen' things up: similar to how low charge state usage can raise sagging voltages, high charge state usage can lower peaky voltages... After this 'loosening', you can charge more while staying under the absolute cutoffs...

Personally, with my pack in its current state, I'm thinking a lot of this extra charge I'm able to do probably stems from me having used the pack at rock bottom charge state for the last month or so. I did cycle up a few times during that low-end usage, but most usage has remained low. I don't really have the greatest data, but after that low charge state usage, the first time I cycled up I was able to charge the pack to an adjusted, estimated real charge state of only about 30%, i.e. the car pos recal-ed at what I estimate to be only about 30% true charge state - not the '75%' you'd expect. That was like two weeks ago. Since then I've concentrated usage toward the 'high' end (above this real 30%, and usually as high as I could go) and now my estimated adjusted true charge state figure is at 67% (that's probably a slight under-estimate, though)...

In other words, two weeks ago I could charge the pack to 30%, now I can charge the pack to 67%. Pretty sure this would never happen were I not juking the system. I'm not sure if grid charging and discharging would accomplish the same thing... It wouldn't if the treatment intervals were too far apart - more than 6 months? 3 months? I imagine that would depend on the condition of the cells.
 

· Registered · 8,686 Posts · Discussion Starter · #22 ·
Was looking a little more at the top-end cutoff behavior today, particularly that automatic discharge/drain. It seems like it has to be, or at least can be, triggered by something transient and short-lived, and/or the 'drain command' itself is just a one-shot deal - like the command says 'discharge until you see XXX', "XXX" being, for example, a clutch pedal press or probably any subsequent IMA usage, like assist or regen. After that it resets and looks for the trigger parameter again, whatever it is...

If I had to guess I'd say it's probably tap voltage slope: when I do coasting regen and it's a modest rate, say 7-12 amps, total voltage might be at like 175-180V, but voltage on a single tap is probably increasing faster than others (i.e. it's more charged, or at least one cell in the stick pair is). I can do this repeatedly and the drain is invoked almost in lock-step with whatever I'm doing. Hard to explain.

I can gauge just how close the pack is getting to 'full', I can see just how much I'm able to input, and I get a feel for when the drain is going to trigger - and the pace and rhythm has the feel of a cell reaching full, as if I were charging a cell on the bench. Charge slope near full gets steeper and steeper (that is, until it peaks). So the sense I get is that slope is being measured under the slight charge load, and once it reaches the set threshold, the drain kicks in. But if I disable the drain, such as by hitting the calpod, and try to trigger it again, it will happen again - sooner. If I discharge a little and repeat the process, the drain takes a bit to kick in. I disable the drain, try to trigger it again, it happens sooner. And again, and it happens sooner, etc. etc...
 

· Registered · 8,686 Posts · Discussion Starter · #23 ·
....Charge slope near full gets steeper and steeper (that is, until it peaks). So, the sense I get is that slope is being measured under the slight charge load, and once it reaches the set threshold, the drain kicks-in. But....
This isn't exactly true. Charge slope gets steeper and steeper, but only up to a point, after which it gets shallower and shallower until the cell is full, voltage peaks and flattens out, and then falls if charging continues. IF the BCM uses slope detection at the top, I imagine it would use the steeper-and-steeper part of the curve, but it seems like it could use the shallower-and-shallower part - either instead of or in addition to the steeper-and-steeper part. For example, perhaps it detects when the voltage curve gets steeper and steeper and implements some controlling behavior, such as regen throttling. But after that, perhaps under the right conditions, it also detects when the voltage curve gets shallower and shallower - and implements more aggressive regen throttling or outright disabling... I haven't really seen absolute disabling, not when nominal charge state hasn't reached 81%...
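
If the BCM really does watch the shape of the charge curve, a crude way to tell 'steeper and steeper' from 'shallower and shallower' is the sign of the second difference of tap voltage. A toy sketch, with made-up numbers:

```python
# Toy classifier for the shape of a tap's voltage curve under a steady charge load.
# A positive first difference means the voltage is still rising; the sign of the
# second difference says whether the rise is accelerating (steeper and steeper) or
# decelerating (shallower and shallower, i.e. approaching the full/peak region).
# Thresholds and samples are invented for illustration.

def curve_phase(v: list[float], eps: float = 0.001) -> str:
    d1 = v[-1] - v[-2]            # first difference (V per sample)
    d2 = d1 - (v[-2] - v[-3])     # second difference
    if d1 <= 0:
        return "peaked/falling"   # voltage has flattened or started to drop
    if d2 > eps:
        return "steepening"       # could trigger the initial regen throttling
    if d2 < -eps:
        return "flattening"       # could trigger heavier throttling / the drain
    return "steady"

samples = [16.90, 16.98, 17.08, 17.15, 17.19, 17.20]
for n in range(3, len(samples) + 1):
    print(curve_phase(samples[:n]))
```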

In any event, was looking a bit at the regen limit flag "Rlf" on OBDIIC&C today, while doing stuff similar to what's described in the previous post. In general, Rlf reads 0 when regen isn't throttled and 1 when it is. I said above that if I had to guess I'd say slope was being detected and triggering throttling and/or the drain. But watching Rlf it almost seems like there could be multiple criteria at play. For example, most often I would see Rlf trigger 1 only after the regen event was over - do some coasting or braking regen, and when back on throttle Rlf flips to 1. You'd think that if slope were the controlling parameter, Rlf would flip to 1 during the regen, not after, wouldn't you?... Flipping to 1 after the regen makes it seem more like open circuit voltage/resting voltage is in play. But I also saw Rlf flip to 1 during regen, so perhaps both slope and resting voltage, or even max loaded voltage - they all could be in play...

Rlf doesn't stay at 1 (under these circumstances; as I recall it does if you let nominal SoC max-out and auto drain is locked in). It only flips to 1 for brief periods. However, throttling/drain behavior sticks around even though Rlf isn't 1. That makes me think indeed a timer is in play - time and/or a cancelling behavior, such as assist or depressing the clutch (turning 'calpod' on and off)...

If you have a BCM that pos recals to 75% rather than ~81%, it seems like you have a lot more control of how much more you can charge the pack at the top. Resetting SoC from say 80% to 75% and then stuffing it some more allows way more charge than resetting low repeatedly and letting the car pos recal on its own. I can reset low and get pos recal say a couple times, but after that additional resets low don't allow any more charge, pos recal happens right away. But, continuing to regen above 75%, taking it up to 80%, resetting back down to 75% and repeating the process, juking the BCM's top-end algorithms, can stuff a ton more into the pack. I've added something like 20% just over the past two drives...

I have one tap that's about 50-100mV higher than the others, when loaded at around 6 amps (say 17.10V vs. 17-17.05V). I think that's due to persistent, 'hard-core' high IR (rather than high resistance due to electrochemical stuff). I imagine that tap might cause 'premature pos recal' or 'premature full'. Now, I think it would cause premature full or whatever if the high cutoff parameter were high loaded voltage, possibly high resting voltage. But I don't think it would if the parameter were slope, steep or otherwise: I'm pretty sure 'high IR' just shifts the curve higher (or lower on discharge), but it doesn't change the contour, in general. Normally I'd think this tap with a high IR cell (or 2) would have a higher voltage spike at the top under charge load, but the voltage would drop lower than other taps when the load was removed. Yet, these NiMH cells, the ones that have persistent high IR, seem like their voltages can kind of get stuck up there, they don't fall lower... I don't really know why this happens, how that works... So, persistent high IR = premature full if trigger parameters are loaded voltage and resting voltage, I guess, but probably not steep slope...
 

· Administrator · 14,392 Posts
Maybe not the right thread but you are the NIMH guru.

Modelling packs..


Could we model a pack's behaviour in a sophisticated spreadsheet, with nice graphs, as we step thru data or time?

I'm rubbish with spreadsheets so this is out of my league.

But I'm thinking we build a spreadsheet with data for 20 sticks, each assigned a value for capacity, voltage, IR and, most importantly, self discharge rate.

The spreadsheet allows you to input nominal SoC as a start point, then it plots daily stick SoC etc. and calculates pack imbalance and whatever else we fancy looking at, for however many days/weeks/months we want...

A bit like weather modeling. We could predict how far out of balance a pack would be after 20/30/90 days etc

If you quantified a set of 20 sticks accurately and input that data into the model, we could see how it would react to charging/discharging/cycling etc etc.

Depending on how clever the spreadsheet is we could add the Peukert effect and natural balancing due to efficiencies etc etc etc.
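
To illustrate the idea - rough pseudo-code more than anything I could actually build, with made-up stick values, but the columns would map straight across to a spreadsheet:

```python
# Rough sketch of the proposed pack model: 20 sticks, each with a capacity and a
# self discharge rate, stepped day by day to watch the imbalance grow.
# All values are invented placeholders, not measured data.
# (Capacity, voltage and IR columns could be added the same way; only self
#  discharge is stepped here.)

DAYS = 90
sticks = [
    # (capacity_Ah, self_discharge_pct_per_day)
    (5.5, 0.5), (5.6, 0.4), (5.4, 0.6), (5.5, 0.5), (5.3, 0.9),  # one "bad" stick
    (5.6, 0.4), (5.5, 0.5), (5.5, 0.5), (5.4, 0.6), (5.6, 0.4),
    (5.5, 0.5), (5.5, 0.5), (5.6, 0.4), (5.4, 0.6), (5.5, 0.5),
    (5.5, 0.5), (5.6, 0.4), (5.4, 0.6), (5.5, 0.5), (5.5, 0.5),
]

start_soc = 0.70                    # nominal starting state of charge
soc = [start_soc] * len(sticks)     # per-stick SoC, as a fraction

for day in range(1, DAYS + 1):
    for i, (_cap, sd) in enumerate(sticks):
        soc[i] = max(0.0, soc[i] * (1 - sd / 100))   # simple daily self discharge
    if day in (20, 30, 90):
        spread = (max(soc) - min(soc)) * 100
        print(f"day {day}: SoC spread across sticks = {spread:.1f} points")
```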

A shared google sheet might be best as several people could work on it.
If you think this should have another thread that's fine.

We might be able to determine if self discharge is linked closely and predictably with Internal resistance or other factors like terminal voltage etc.

Meaning that easy-to-determine IR could be used to predict or model SD without having to actually measure each stick's SD rate..

Just lockdown thinking..
 

· Registered · 8,686 Posts · Discussion Starter · #25 ·
^ hmm, I think I understand everything, and it makes sense, up to the last couple lines. Pretty sure self discharge isn't correlated with 'IR', or if it is it's a negative correlation ('high IR' correlated with slower self discharge)...

In general, are you talking about a theoretical model, where you plug in fictitious values and the model is simply used to inform and educate, or a real one, where you plug in real values and try to, say, diagnose a pack? Sounds like you're talking mainly about the former... I think a very simple model might be doable, and might be useful, though I doubt anyone would be interested.

I think anything that tries to get too 'real-world' would get too complicated and would probably end up missing the forest for the trees...

Even just modelling 20 sticks (perhaps 10 taps) with a couple variables, like capacity and self discharge, within the confines of management that has only a couple parameters, like top and bottom voltage cutoffs, over time, would require a lot of rows and columns and calcs... hmm, I don't think I know enough to 'model' even a single cell...

I don't think I'd be up for it. I barely enjoy 'modelling' my whole pack with the simplest of data and calcs, let alone trying to do 20 sticks with added complexity. I can't think of anyone else around here who would care enough to participate... I don't think anyone even looks at the simple charts I've done, which aim to elucidate relatively simple (though fundamental) concepts; I imagine it'd be even more...oppressive for people to 'look at' a 20 stick model...
 

· Registered · 8,686 Posts · Discussion Starter · #26 ·
Took my pack back down to the bottom today and watched assist limit flag ('Alf'), still on 010 BCM. A couple things I noticed:

-Assist gets throttled when nominal charge state gets low - I think it was around 30% - but I wasn't seeing the assist limit flag.

-I reset nominal SoC from around 28% to 40%. As mentioned earlier, with the 305 BCM I can do that and defeat throttling, but with the 010 BCM I wasn't able to, last time. Now I was able to. But it only lasted until Alf triggered, which wasn't too long after...

It looks like Alf is triggered by tap voltage slope detection (near-empty cell): every time a steep slope is detected, Alf flips to 1 (and back to zero). And every time Alf flips to 1, assist is immediately throttled. With the 010 BCM I can charge back up a little, like 5-10%, and then use assist freely as long as nominal SoC is set artificially high (such as 40% rather than 28%) and Alf isn't triggered. With the 305 BCM I'd be allowed two steep slope detections - the first one at whatever current, the second one at low current - and then a neg recal.

* * *
One other thing I've been noticing in general, and something I'm starting to believe, is that my pack doesn't perform as well when it's charged high and used high as when I use it low. I had it charged to probably a true 75%, but it didn't take much assist, maybe 10-20% of capacity, for total voltage to look pretty saggy. Like, after only about 10-20% usage, total voltage was below 144V at about 20 amps. When I've used the pack low, I've seen higher voltages for longer, and typically not below about 144V until close to the bitter end. Of course, if I'm using it low I'm closer to the bitter end to begin with. But I know I can get more than, say, 10-20% of usable capacity at loftier voltages - I'm thinking it's more like 35-45%...

This is pretty hard to explain. Earlier I had been thinking that low-end usage restores 'curves', restores performance, across a whole absolute charge state range. I can use the pack low and see performance down low improve, and I've generally assumed that improvement at the low end meant improvement at the top as well, at the same time, that they go hand in hand. But over time I've come to see that that's not the case.

I'm pretty sure that low end works better than high end, and I'm not sure why that's the case. But even so, a trade-off is happening: low-end performance comes at the expense of high end performance, and vice versa...

Basically, there's like two things happening:

1. When you use low end it's like you drag active materials from the top down to the low end. You see performance gains down low. If you could miraculously, instantaneously transport your usage to the top, however, you would NOT see great performance - there really is no top end any more. You've dragged stuff from the top down low, so as long as you're seeing good performance down low you can't see good performance up high, it's a trade-off.

Of course, you can't just instantaneously go from low usage to high usage; you have to charge back up. The first few charges you won't see good performance up high, it takes several cycles to start seeing good performance up high after you've been using the pack low. Why? Because you're now dragging active materials back up to the 'top', the cycling up high is re-concentrating active materials 'up there'... The same or similar things happen when you go from top usage to bottom usage...

2. When you use low end you probably recondition 'stuff' down there. I think this is a different, distinct process. For example, you burn bad stuff - 'crud', shrink crystals, probably achieve some kind of balancing among cells in the process, etc etc. This isn't the same thing as the 'dragging stuff from high' type of recondition or what have you...

From what I can tell, this number 2 happens early-on, and then you're stuck with the finite, stunted capacity of used, old cells, and all you can do is drag top end down or drag bottom end up, achieving good performance within that window, but never achieving that good performance across the entire 6.5Ah capacity (or maybe 5.2 Ah capacity, as I imagine even new cells can't do this "good" performance, such as 90 amps for 4 seconds or 45 amps indefinitely, across the entire 6.5Ah range)...

So the question now is: given the trade-off, where you can have good performance low or high but not both at the same time, why does the low end still seem to work better? Just not something I understand.

Falling back on old, boiler-plate ideas, it seems possible that charging at high currents at relatively high charge states can quickly 'crud-up' the cells -- inducing 'voltage depression'... In general it seems like high charge state is a stressed-state by default, the cell is wound-up. That would be bad for charges, but you'd think it'd be good for discharges. It just doesn't seem to work out that way... Maybe it's a combination: high charge state is stressed, so charging stresses even more, crudding things up, and then discharges end up weak - because the 'crud' gets in the way. But when you charge down low, the cell isn't in such a stressed state, so crud creation is minimal - discharges end up good...
 

· Registered · 8,686 Posts · Discussion Starter · #27 ·
More thoughts on 010 vs. 305 BCM low charge state slope detection and basically other possible 'battery empty' indicators:

I'm thinking it's possible or even likely that different BCMs have different values for steepness of slope. In this case I'm thinking the 010 might have a shallower slope value, as it seems to trigger sooner and faster, and is allowed to trigger repeatedly, compared to the 305. The 305 might have a steeper slope value that's only allowed to trigger twice before disable (neg recal, 'battery empty'); the 010 might have a shallower value that's allowed to trigger over and over, until something else finally signals 'battery empty'. I still haven't gotten a neg recal with the 010 this time around, though...

One of the differences I suggested earlier was that the 305 throttles assist strictly due to nominal charge state, but that can be easily circumvented by resetting charge state artificially high, whereas with the 010 that seems more difficult to do. But it might be the other things that end up making it difficult, not the nominal charge state per se. With just a little charging (5-10%), 010 BCM, nominal charge state reset from 28% to 40% yesterday and held over to today, I see no problem freely using assist - that is, until 'Alf' triggers. And even then the throttling isn't immediately heavy. So it appears that maybe multiple Alf triggers alone end up resulting in heavy throttling, not a low nominal charge state nor, say, a net amp-hour count that underlies a more real value for charge state...

In other words, when it comes to the low-end charge state/empty behavior of the 010 vs. the 305, they appear to be more similar than I was suggesting earlier; the only major difference might be this multiple-Alf-trigger, perhaps different-slope-value idea.
 

· Registered · 8,686 Posts · Discussion Starter · #31 ·
Continuing with the whole 'Alf'/empty behavior line of questioning, I'm seeing things with this 010 BCM that complicate my interpretations to-date of 'what's going on'. I saw at least a couple things today that just don't square...

-In auto-stop, ~1 amp discharge load, near empty, I got a neg recal but never once saw Alf trigger. I was saying earlier that Alf must trigger in response to slope detection. And I also have said that neg recal is a response to slope detection. So, how can both of these be true when I get a neg recal (due to slope detection?) yet never see Alf trigger? In theory, Alf should trigger first, and neg recal should happen shortly thereafter (under load)...

-I had a tap below 14V - well below, at 13.53V - but did not get DCDC disable. Earlier I had said that DCDC must disable when a tap drops below 14V resting or near resting, but that didn't happen here. Total voltage was relatively high, above 144V.

I just can't figure out what the logic is here, with Alf, throttling, neg recal, and DCDC disable. It seemed pretty clear, cut and dried with the 305 BCM. But there's something different with this 010 that's mucking things up.
 

· Registered · 8,686 Posts · Discussion Starter · #32 · (Edited)
More, but different, observations...

Generally the same test setup as earlier - auto-stop, DCDC active about 1.2A load, pack on verge of being empty, try to measure tap voltages.

This time, the first time I pulled into my garage and had the car in auto-stop, I lost DCDC very quickly -- DCDC disables, the 12V load switches to the 12V battery only, and the car doesn't come out of auto-stop. So I had to re-start, back up, and get it back into auto-stop with DCDC active. I never saw Alf (though I could've missed it) nor got a neg recal - just DCDC disable.

This would be relevant to a couple threads I've seen recently about losing auto-stop and 12V warning light...

When I finally got it re-engaged, I decided to try something different. I measured tap voltages under load (~1.2A), but then decided to up the load by turning headlights ON, high beams, etc. I turned headlights ON, then high beams, and the moment I turned high beams ON I saw Alf flip to 1 and the DCDC disabled. No neg recal. Total pack voltage was above 144V...

Not sure what to make of it. Obviously the increased load dropped tap voltages, and it seems clear that, given the high total pack voltage, Alf is responding to tap voltages. The question is whether there's an absolute low voltage threshold that gets crossed when Alf triggers, or if there's a slope calculation taking place. I can't really tell.

The way Alf triggers under a constant load - and the way the MCM must be responding with assist throttling, at least with this 010 BCM - makes me think it goes something like this:

I've seen this now over several trials, where with a near empty pack (but total voltage above 144V) Alf flips 1 then back to zero, and then seconds later it flips 1 again, and so on, as long as the load is held. Each time Alf flips 1 assist is throttled, more. Watching total voltage, seeing the way each throttling event results in total voltage essentially holding steady, I'd have to say that there's a constant slope detection going on, that when the BCM measures a too-steep slope Alf flips 1 and assist is throttled so as to prevent the falling tap from falling too much, too fast.

I don't think it can be an absolute voltage threshold, though. Knowing what I know about the condition of the sticks underlying my taps, I've got single cells that drop out early, so total tap voltage remains relatively high; it's just the steepness of the discharge slope that's being detected... When the steepness is detected, assist is throttled/load decreased, and tap voltage stops falling as fast. But of course if the single cell is truly near empty, voltage will continue to fall, so as it does the BCM detects the steepness again, Alf flips to 1, and assist is throttled even more...
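
The kind of loop I'm picturing, sketched out below - all the numbers (slope threshold, throttle step, the fake pack response) are invented for illustration; this is just the shape of the behavior:

```python
# Sketch of the repeated trigger-and-throttle loop I think I'm seeing with this 010 BCM:
# each steep-slope detection flips Alf and shaves the allowed assist current, which
# flattens the tap voltage until it sags enough to trigger again. All numbers invented.

STEEP_SLOPE = -0.03          # assumed V/s threshold on a tap
THROTTLE_STEP = 0.25         # cut the allowed current by 25% per Alf trigger

def sample_tap_voltage(load_amps: float, v_last: float) -> float:
    # stand-in for the real pack: the near-empty cell sags faster under a bigger load
    return v_last - 0.002 * load_amps

assist_limit_amps = 100.0
v_prev = 14.1
for t in range(20):
    v_now = sample_tap_voltage(assist_limit_amps, v_prev)
    slope = v_now - v_prev                        # V per 1-second sample
    if slope < STEEP_SLOPE:
        assist_limit_amps *= (1 - THROTTLE_STEP)  # Alf=1: throttle harder each trigger
        print(f"t={t}s  Alf=1  new assist limit ~{assist_limit_amps:.0f} A")
    v_prev = v_now
```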

What's 'interesting' with this 010 BCM is that even though I'll see many successive triggers of Alf, I don't see neg recals, not for a while. That's different from the 305. One thing that occurred to me, yesterday, is that this 010 was a discontinued model, apparently because of a 'cold discharging' bug - supposedly, this BCM will allow too much power draw under cold conditions. It makes me wonder whether what I see here, with successive Alf triggers yet no neg recal, no disable, might be related...

In any event, I think I might go back to the 305 BCM. Its 'empty behavior' just seems more cut and dried, easy to predict, easy to work with. With the 010 it seems like there's 'things' at work that might be at cross-purposes with one another, like there's a few different things going on that dictate 'empty behavior', yet they're not all on the same page... As I mentioned earlier, this 010 is a very low serial number computer, probably one of the first. I can imagine that Honda still had some programming bugs to work out when it was released. I can see how a threshold for one algorithm, say one responsible for 'catching empty cells' and/or 'throttling assist', might not be perfectly in sync with a threshold for another algorithm, perhaps one meant to disable the DCDC to reduce the load on the pack, etc. Seems like it'd be easy to not get 'it all' working together just right -- so you can end up with a DCDC disable, yet no neg recal; a neg recal yet no Alf trigger; an Alf trigger yet no forced-charge (I've been seeing that, too); and on and on...
 

· Registered · 8,686 Posts · Discussion Starter · #33 ·
Still thinking about 'empty pack' behavior. Want to document one thing quickly before I forget it, again.

It's actually been kicking around IC for a long time -- the idea that most 'throttling' happens when a tap drops below 13.2V, or 1.1V per cell (1.1V per cell is Panasonic's figure for an appropriate discharge level, which works out to 13.2V for 12 cells in series). I was reminded of that today when I saw Alf flip to 1 and was watching for the absolute voltage threshold, albeit at the pack level.

I recalled an old tap voltage graph that Eli made a long time ago, which shows one tap dropping a little lower than the others and current-throttling every time that tap's voltage dropped to about 13.2V. It's not totally cut and dried: the initial threshold I believe is 12V, and then subsequent throttling is a bit above 13.2V. But shortly thereafter the value does look like 13.2 volts... hmm, though it's not exactly 13.2, and it's not exactly the same at each throttling event. The thing is, the slope for the one tap presumably causing the throttling doesn't look any steeper than the slopes for the other taps -- so it doesn't look like it could be slope detection.

Here's that graph:
[attached: Eli's tap voltage graph]



In any event, perhaps Alf does have an absolute threshold - about 13.2V. I don't see Alf trigger when I'm discharging the pack in auto-stop, at a low current, because the current is so low, the tap simply never drops below 13.2V. If this is the case, then it means neg recals have a different trigger criterion: Alf might be an absolute voltage, such as 13.2V, and neg recal is slope detection, perhaps among other things.
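
Written out as pseudo-logic, the hypothesis looks something like this - just a sketch, with the 13.2V figure and the slope threshold being the guesses from above, not anything confirmed:

```python
# Sketch of the two-criteria hypothesis: Alf (assist throttling) keys off an absolute
# loaded tap voltage (~1.1 V/cell x 12 = 13.2 V), while neg recal keys off the
# steepness of the tap's discharge slope. Guessed values, not firmware values.

ALF_TAP_VOLTS = 13.2
NEG_RECAL_SLOPE = -0.05      # assumed V/s

def evaluate_tap(v_now: float, v_prev: float, dt: float = 1.0) -> dict:
    slope = (v_now - v_prev) / dt
    return {
        "alf": v_now < ALF_TAP_VOLTS,           # absolute-voltage criterion
        "neg_recal": slope < NEG_RECAL_SLOPE,   # slope criterion
    }

# A tap at low current in auto-stop can drift down and hit the slope criterion
# (one cell dropping out) without ever crossing 13.2 V:
print(evaluate_tap(v_now=14.25, v_prev=14.40))   # {'alf': False, 'neg_recal': True}
# Under a heavier load, a saggy tap can cross 13.2 V without a steep slope:
print(evaluate_tap(v_now=13.10, v_prev=13.12))   # {'alf': True, 'neg_recal': False}
```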

Maybe it's not exactly 13.2V, 1.1V per cell, but somewhere around that -- depending on what the nominal charge state is. At higher charge states the value is a little higher, and once charge state goes lower the value is lower - 13.2... I imagine there could be different enabling criteria across the whole nominal charge state range, and there does appear to be enabling criteria in terms of current/load: if you mash the throttle and hold it there, I think the Alf/throttling voltage threshold is 12V, not 13.2V... But I've been mainly trying to hold that stuff constant, focusing only on what happens when artificially circumventing nominal charge state throttling (by resetting SoC high with OBDIIC&C) and looking at 'Alf' and neg recal only under particular circumstances - at a modest <25 amp load and in auto-stop, where the load is only about 1 amp...
 

· Registered · 8,686 Posts · Discussion Starter · #34 ·
Just came back here to write something that's akin to what's explained in the above post, not remembering exactly that I already explained it. Basically, I'm pretty sure this is what's going on: Assist limiting (Alf), the real-time variety, is probably a response to absolute low tap voltage of around 13.2V. Neg recal is slope detection.

A loaded low voltage roughly equal to 1.1V per cell would be the time to implement throttling. At different discharge rates and with different pack conditions you could end up with throttling all over the map. For instance:

-at higher loads, a pack or tap with 'high IR' would see voltages plummeting - and throttling would kick-in, even if the pack were at a high charge state.

-Or, you could have an imbalanced pack, where some taps are at say 60% and others are at say 20%, the total pack voltage remains high but even a slight discharge load will drop one tap below 13.2V...

-Or, you could have a voltage depressed pack where at middling charge states even low loads drop a tap's voltage below 13.2V...

BUT, none of this means the pack is empty, not yet. The voltage depression and 'high IR' really complicate potential interpretations of what '13.2V' means. A pack or taps can be charged a lot, but high IR or voltage depression result in saggy voltages - and throttling.

The pack is empty when a cell drops out - that's slope detection. I mean, maybe most cells aren't empty, but with one empty cell the pack is done.

You can have a tap - 12 cells - where 11 cells are say 50% charged and 1 is near empty. The total voltage will hum along at a modest load at a fairly high value. The normal loaded voltage is about 1.25V per cell, so 11 X 1.25V = 13.75V -- well above the Alf trigger point. And we're not even counting the near empty cell.

But being near empty, that cell's voltage is going to tank asap, and once it does, the slope gets steeper, the BCM detects that and - boom - neg recal. And the clincher here is that you would never see Alf trigger - because the tap's voltage never fell below ~13.2V, the 11 remaining cells uphold a voltage well above that level...

Ideally - and this is something I've been seeing - you see Alf and neg recal at virtually the same time. Cell voltages are tanking and total tap voltage is below 13.2V... Everything's pretty well matched and balanced.
 

· Registered · 8,686 Posts · Discussion Starter · #35 · (Edited)
^ Alf is probably slope detection, too, not absolute voltage. I think different degrees of slope can trigger different things, such as Alf, neg recal, and DCDC disable, and it also depends on some other contextual stuff, like the nominal state of charge...

Why packs fail(?)

That's not why I'm back here though. Just wanted to jot down some quick thoughts on 'Why packs fail'. I think I've got a pretty good conceptual thing going now, I should make some diagrams but I'm way too hot and got other things to do. So in lieu of that I need to at least get a few ideas down...

I've mentioned bits and pieces of this before, it's not all new. The gist of it is:

-Start with how the car manages packs: a 120-cell string where any cell-level deviation - most likely faster self discharge in a single cell - can result in that cell draining to near empty before the others. Assist will be disabled, the BCM will charge the pack.

-Insight NiMH, if not all NiMH, have unusual behavior: they don't operate within a single, fixed voltage range; rather, that range can vary depending on how they're used, such as whether they're discharged to empty or not. In general, if you discharge a cell to near empty, the operating voltage will increase; if you short cycle high, the operating voltage will 'sag'. The range is roughly 1.2V to 1.37V - so a cell that's short cycled (at least not drained to near-empty, if not cycled at a high charge state per se) will drift toward 1.2V, while a cell discharged near empty will drift toward 1.37V.

-What happens over time and usage is that the cell that 'is charged the least' (i.e. at the lowest charge state), such as this faster self discharge cell, develops a higher operating voltage profile. Within the context of OEM management, this means that cell eventually dictates when assist will be disabled due to a near-empty pack (empty cell), and also when the pack will be considered full - the high voltage of the single cell probably lifts the tap voltage high enough that it ends up being the highest voltage tap as well. Less sure about that part, but it's likely, I think.

-What's more important, though, is that the other cells then end up being used at a pretty high charge state - but probably more importantly, within a narrow charge state window, repeatedly. And probably most importantly, they're never discharged very low, never even remotely close to empty. This increases 'voltage sag', and the general pattern of usage and decline gets worse and worse.

-The low cell gets used at lower and lower charge state, its voltage profile gets high and remains high, maybe higher and higher, up to a point. Meanwhile, the other cells get cycled in an increasingly narrow window, are never discharged much or low, their voltages sag more and more, and all sorts of other forms of degradation set in, more or less akin to 'memory effect', though it's not memory effect per se...

That's pretty much it.

My guess is that slight manufacturing differences and temperature variations in the pack can cause deviations at the get-go, and then the way the pack is designed (120 series cells) and managed (10 taps, single cells can trigger management decisions, etc.) means the BCM can't do anything about it and/or makes things worse.
 

· Registered · 8,686 Posts · Discussion Starter · #36 · (Edited)
Does the BCM use relative tap voltages??

Don't have time to get too into this, but I wanted to quickly jot down something weird and different I saw last night and today, something that suggests the BCM handles neg recals, 'empty', and all that differently than what I've suggested above and elsewhere.

I was having some pack issues, identified the tap through normal methods, pulled that tap, treated the sticks, charged them half way and put them back in the pack. The tap would probably be a little more charged than the others, but its voltage was quite a bit higher than the others: 16.42V versus about 15.50V.

Weird things were happening when I first tried to start the car, basically immediate neg recal, etc. But I don't want to get into that. I thought I must have had another near empty cell, but that wasn't it.

In a nutshell, after I got things going, I was seeing neg recals and Alf=1 very quickly - it seemed way too soon, at higher voltages, etc., than I would have expected - and when I measured tap voltages under discharge load there was no voltage drop indicating a single empty cell or cells.

Last night I parked the car in autostop and was at the neg recal point. I even put a 75W bulb on the pack and still got a neg recal. I put a tap short on Tap 7 overnight - the high-voltage tap that I had just reinstalled - to bring it down a bit. My ongoing amp-hour accounting put all the taps but 7 at about 30% charged - and typically I wouldn't get a neg recal or premature Alf at such a level, not unless I had a single low-cell outlier...

Today I was able to bring the pack down much lower - about 15 points lower - than yesterday. For example, the lowest tap voltage I saw yesterday at neg recal was 14.85V (that's with the 75W bulb load, 0.7A). Today it was 14.30V (at -1.2A). Tap voltage drops at light discharge load at time 1 versus time 2 were about the same yesterday and today, about -0.05V across the board.

The only difference today versus yesterday was that Tap 7's voltage was now closer to the others, due to driving/usage last night and the tap short overnight. The autostop discharge voltage drop for Tap 7 yesterday was -0.12V versus about -0.21V for the others; today it was -0.03V for Tap 7 versus about -0.05V for the others. So yesterday the voltage drop for Tap 7 was roughly half as big as the drop for the others; today it was about the same...

Anyway, point is, I'm wondering if some of the neg recal/empty behavior of the BCM is based on relative voltage drop, rather than absolute slope for each tap, whether a tap can be called 'empty' by the BCM if the relative voltage drops are out of whack/different?? For instance, perhaps the Tap 7 modest drop/discharge voltage curve acts like a baseline, and when the BCM measures steeper drops in the other taps - it calls those empty?? Like, if one tap discharge curve looks a lot flatter than the others, it makes the others look steeper, relatively speaking.

I don't know, I'll have to try to keep this in mind. It seems possible, but I don't quite have it all sussed-out. The only other explanation I can think of is wacky BCM behavior, maybe based on amp-hour counts and nominal state of charge and all that. I don't see anything different though now than what I've seen for the past few years; the only difference is that I don't think I've ever installed a stick pair with a voltage that different than the others. Or maybe I have, absolute voltage differences, but probably not with all of them near the bottom and one in the middle, in terms of effective charge state...
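
If I were going to test that idea, the comparison I have in mind is something like this - just a sketch; the 0.05V margin and the full per-tap lists are made up, and only the Tap 7 vs. 'the others' numbers come from the measurements above:

```python
# Sketch of the "relative voltage drop" idea: treat the flattest tap as a baseline
# and flag any tap whose drop under the autostop load is much bigger than that
# baseline. The 0.05 V margin is a pure guess, just to illustrate the comparison.

def relatively_steep(drops: list[float], margin: float = 0.05) -> list[int]:
    baseline = max(drops)                      # the flattest tap (smallest drop)
    return [i for i, d in enumerate(drops) if (baseline - d) > margin]

# Day 1-style data: Tap 7 (index 6, numbering taps from 1) dropped about -0.12 V,
# the others about -0.21 V - relative to Tap 7, everything else looks "steep".
day1 = [-0.21, -0.21, -0.20, -0.22, -0.21, -0.21, -0.12, -0.21, -0.21, -0.22]
print(relatively_steep(day1))   # -> every tap except Tap 7

# Day 2-style data: Tap 7 about -0.03 V, the others about -0.05 V - no outliers.
day2 = [-0.05, -0.05, -0.04, -0.06, -0.05, -0.05, -0.03, -0.05, -0.05, -0.05]
print(relatively_steep(day2))   # -> []
```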

edit: OK, here's a couple charts that show the tap measurement differences today vs. yesterday:
[attached: tap voltage bar charts, Day 1 (top panel) vs. Day 2 (bottom panel)]


B4 is unloaded voltage before the trip (grey bars); AS1 and AS2 are measurements in autostop after the trip, loaded at the labeled value. Yesterday I got the neg recal in autostop but had stepped away so didn't get the measurement, so I put a 75W bulb on to repeat the process - that's what NR 75W is; same stuff for the second day in the lower panel. Labeled values on the bars are the change in voltage between AS2 and AS1, i.e. time 1 and time 2. Usually I'd see one tap with a much larger voltage drop, which indicates an empty cell, and that empty cell causes the neg recal/empty determination - so the theory has gone. But these charts don't show any empty cells - at least, not an empty outlier. At this point they all should be at about a true 15% charge state, Tap 7 at about 19%.

As far as BCM behavior goes, the top panel and bottom panel are identical - i.e. the same BCM behavior, only much different values, lower on day 2. The only data-based difference seems to be the 'big' Tap 7 bar on day 1 vs. day 2.
 

· Registered · 8,686 Posts · Discussion Starter · #37 ·
Here's a 'Day 3' tap chart like the ones above. Figured I'd post it to complete the sequence. Here you can see how Tap 9 has a near-empty cell that triggers the neg recal/empty determination. Its voltage drop/change under the autostop load (-0.55V) is larger than the drops for the other taps. That's the way it should happen, unlike Day 1. This neg recal is a full 25 points of charge lower than the one that happened on Day 1 (something like 13% vs. 38%)...

[attached: Day 3 tap voltage chart]
 

· Registered · 8,686 Posts · Discussion Starter · #38 · (Edited)
weird top-end behavior after full grid charge, BCM multiple resets to 75%, etc.

I was messing around with grid charging last couple days and noticed a couple things that strike me as odd, somewhat perplexing, when it comes to how the BCM deals with a truly fully charged pack...

I charged my pack to about 100%; total voltage was about 171-172V when I got in to drive very shortly after the charge (like 15 minutes). First thing is, Rlf - the regen limit flag parameter (on OBDIIC&C) - was triggered and pegged ON. At first glance that doesn't seem too strange, but when you think about it, it seems to say something iron-clad about the management: it has to have a resting-voltage-based limit on regen. There's no other parameter that could be monitored and trigger Rlf at this point in time - the car hasn't been started, no current, no assist/regen, etc. Temp was modest...

I took tap voltage measurements at the end of charge, no load. The highest tap was 17.40V, most were around 17.25V. This means the highest that voltage could have been when I actually went to drive was no more than 17.40V, but probably lower, given the total voltage and how voltage settles down pretty quickly when it's that high. Highest tap voltage was probably around 17.20V, so that's probably the value for this presumed regen resting voltage limit...

As I recall the Rlf didn't stick around long, like I think it went away simply after starting the car...

The other thing: I've known the BCM can reset nominal charge state, sometimes repeatedly, to 75% after a grid charge or when you try to 'stuff' the pack with SoC resets and the like. Never thought too much about it. But this time I realized the BCM must have a set resting voltage threshold for that, too. I can't think of any other explanation. I wish I had been looking more closely at pack voltage when this was happening, cuz I don't remember the exact value. But it wasn't all that high. I want to say it was actually pretty low, like lower than 16.80V. Though at least one tap could've been around that...

When I got in to drive, I avoided regen and tried to use assist. I must've drained at least 3 or 4 points total, but the BCM was still resetting nominal charge state from a lower value, like 73%, back to 75%. It did this maybe 3 times.

So, this is the same thing as a pos recal, which means pos recals must have a resting voltage set threshold. I guess that's nothing new, but this behavior would seem to absolutely confirm it. I can't think of any other behavior/metric that could be used... I guess at first I was thinking the voltage must have been lower than 168/16.8V, and was asking myself, 'What could the BCM be detecting to make it reset to 75%?' But, now I'm thinking there could have been one tap that was still above 16.80V, maybe... It was lower than 17V though...

I don't know. It seemed like it was lower than 168/16.80V, that's why I was so puzzled. I can't really think of anything that would tip off the BCM to a near fully charged pack, other than a high resting voltage. Maybe there's some kind of 'voltage change analysis' and maybe 'inter-tap voltage analysis' that could do it, but, that would seem overly complex for the task at hand... I'm picturing that tap voltage was below a seemingly high resting voltage, of like ~16.80V, and then asking, How could the BCM know the pack is 'full'? Also keep in mind that I haven't done any regen yet, no charge has taken place...

Yeah, I don't know, I'll probably have to try the same thing again and watch for stuff...
 

· Registered · 8,686 Posts · Discussion Starter · #39 · (Edited)
eq1 declared king of crusty packs, Panasonic 'discharge your cells' dictum

...Anyway, like I said, I'm torn between the 'vindication' I'll get (in my head) when I'm still using an original 2002 OEM pack 5 years from now, yet I've got LTO cells moldering in my closet...
Thought this was funny, had to repost it. It's been like 4 years since that post, but 5 years since I started with this pack, still using it, more or less still performing as well as it did 5 years ago. I don't really feel any vindication though, not even in my head... All the real, deep answers are at the electro-chemistry level, and I don't really have those. Well, I've got a lot of the pieces, just don't have a deep understanding of that stuff.

Also thought this should be re-hashed. After all these years of thinking about this stuff, testing, etc., most Insight battery problems seem to boil down to this.

I was skimming through an old Panasonic NiMH Technical Handbook and came across a rather matter-of-fact statement - something I've seen before yet never fully appreciated - that supports the notion, explained above, that Honda's battery management is most likely flawed, maybe seriously so...

"Discharge characteristics
...As with Ni-Cd batteries, repeated charge and discharge under high discharge cutoff voltage conditions (more than 1.1V per cell) causes a drop in the discharge voltage, which is sometimes accompanied by a simultaneous drop in capacity. Normal discharge characteristics can be restored by charge and discharge to a discharge end voltage down to 1.0V per cell."

Certainly over time and usage our packs degrade due to this: packs usually only get discharged to the equivalent of 1.1V per cell; cells never see this 'Panasonic restoration voltage' of 1.0V. It'd actually be worse if you drove at night all the time, where background charge kicks in at a high charge state, or if you don't use much assist. I also think it can happen very quickly, though, perhaps more so or only with older packs...
 

· Registered · 2,484 Posts
Thanks for the Panasonic reference.

I recently had a pack that would throw an IMA code under load, with one channel dropping 1V below the rest during assist. The pack was not responding to the car's normal attempts to top balance it. One pack-level discharge, performed slowly and only to about 130-140V, was enough to make the pack behave normally again. I think the car even started with the IMA and then recharged it; I did not take it very low. (How long this will last is a different question.)

Your Panasonic citation supports this, though I wish they would describe the electrochemical effect at work.

I suspect that the repeated overcharging that the pack does with a cell in this condition eventually damages that cell or others.

I am not sure how Honda could have done this better, since "we are going to hobble your car for two days while we correct this problem" would not be acceptable. Similarly, if they deliberately allowed the IMA voltage to drop during driving to correct the problem, and during this time the driver needs assist to pull through an intersection, that becomes a safety issue.

The only way they could have dealt with this that I can think of is some way to take part of the pack offline to recondition it while leaving enough capacity online to have enough assist to get out of a sticky situation. I don't see how Honda could have done it better without adding a lot more weight or cost.

We have not found a way to do it better, either, without taking the IMA offline, or without a lithium conversion.
 